Bug 1232641

Summary: while performing an in-service software upgrade, the gluster-client-xlators, glusterfs-ganesha, and python-gluster packages should not get installed when a distributed volume is up
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: build
Assignee: Bala.FA <barumuga>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: high
Priority: high
Docs Contact:
Version: rhgs-3.1
CC: annair, barumuga, dpati, nlevinki, pprakash, rcyriac, rhs-bugs, vagarwal
Target Milestone: ---
Target Release: RHGS 3.1.0
Hardware: x86_64
OS: All
Whiteboard:
Fixed In Version: glusterfs-3.7.1-4
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1319998 (view as bug list)
Environment:
Last Closed: 2015-07-29 05:05:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1202842

Description SATHEESARAN 2015-06-17 08:39:49 UTC
Description of problem:
-----------------------
While performing an upgrade, a check is done to see whether a distributed volume is UP.
If a distributed volume is running, then upgrading the glusterfs packages should fail.

But now, the glusterfs-client-xlators package gets installed even when a distributed volume is up.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7.1-3.el6rhs

How reproducible:
-----------------
Always/consistent

Steps to Reproduce:
-------------------
1. Create and start a distributed volume with RHGS 3.0.4
2. Try to update the gluster RPMs to RHGS 3.1 ( glusterfs-3.7.1-3.el6rhs )
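The steps above can be sketched as the following session on a RHGS 3.0.4 node. The volume name and brick path are illustrative assumptions, not taken from the report; running this requires an actual RHGS node.

```shell
# Create and start a plain distribute volume
# (volume name "distvol" and brick path are hypothetical)
gluster volume create distvol server1:/bricks/brick1 force
gluster volume start distvol
gluster volume status distvol   # confirm the brick process is up

# Attempt an in-service update to the RHGS 3.1 packages
yum update 'glusterfs*'
# Expected: the update is blocked while the volume is up; with
# glusterfs-3.7.1-3.el6rhs, glusterfs-client-xlators slips through.
```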

Actual results:
---------------
The glusterfs-client-xlators package gets installed, while installation of the other gluster RPMs is correctly blocked.

Expected results:
-----------------
The gluster core packages should not get installed or upgraded when a distributed volume is up.

Comment 1 SATHEESARAN 2015-06-17 08:43:58 UTC
There was a similar bug in RHGS 3.0.4, where the glusterfs-geo-replication and glusterfs-cli packages were getting updated while a distributed volume was up. That issue also had a customer case attached to it.
It is tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1211839 and was resolved with the latest glusterfs build ( glusterfs-3.7.1-3.el6rhs ).

Comment 2 SATHEESARAN 2015-06-17 15:27:48 UTC
As per Bala's email, three packages are affected when performing an in-service software update while a distributed volume is up:
1. glusterfs-ganesha
2. glusterfs-client-xlators
3. python-gluster

Changing the bug summary accordingly

Comment 3 SATHEESARAN 2015-06-17 15:28:52 UTC
Adding Niels' comment from that mail thread

<snip>

Now, all sub-packages of the glusterfs src.rpm will need the %pretrans
scripts. If a package does not have the script, it might get updated
while Gluster processes are running. This is not a problem, until the
processes get restarted and different versions of libraries are
expected. Likely no immediate errors, but hard to debug unexpected
behaviour could be the result.

Maybe the %pretrans is not needed for python-gluster, but it should be
required for the others. Any sub-package that has a versioned dependency
on any of the glusterfs packages needs the %pretrans script.

</snip>
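The fix lands these checks in the spec file's %pretrans scriptlets. As a hedged illustration only (the real glusterfs.spec differs in detail, and the sub-package name and messages here are assumptions), a %pretrans scriptlet is written in embedded Lua, because at %pretrans time no shell is guaranteed to be installed yet, and it aborts the whole RPM transaction if a gluster process is still running:

```
# Hypothetical spec-file fragment, not the actual glusterfs.spec code
%pretrans client-xlators -p <lua>
-- Abort the transaction if a brick process (glusterfsd) is running;
-- os.execute returns the exit status of the command.
if os.execute("pidof glusterfsd >/dev/null 2>&1") == 0 then
   error("glusterfsd is running; stop all volumes before updating", 0)
end
```

Per Niels' comment, every sub-package with a versioned dependency on the glusterfs packages needs such a scriptlet, otherwise that one sub-package can be updated mid-transaction and leave running processes to restart against mismatched library versions.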

Comment 4 Bala.FA 2015-06-17 16:15:16 UTC
Patch is under review at https://code.engineering.redhat.com/gerrit/50967

Comment 7 SATHEESARAN 2015-06-19 11:00:49 UTC
Tested with glusterfs-3.7.1-4.el6rhs,

Tried to update the glusterfs RPMs ( from RHGS 3.0.4 to RHGS 3.1 ), both with a distributed volume running and without stopping the brick processes. On both occasions, the gluster core packages ( glusterfs-*, glusterfs-client-xlators, glusterfs-ganesha ) were not installed/updated.

Marking this bug as VERIFIED

Comment 8 errata-xmlrpc 2015-07-29 05:05:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html