Bug 1372038

Summary: RHEV-H does not include Gluster packages that support RHGS 3.1.3
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Gordon Watson <gwatson>
Component: distribution
Assignee: Sreenath G <sgirijan>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.0
CC: fdeutsch, gklein, lsurette, mkalinin, pstehlik, rhs-bugs, sasundar, srevivo, storage-qa-internal, ycui, ykaul
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1373661 (view as bug list)
Environment:
Last Closed: 2016-10-21 15:27:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1373661, 1373663

Description Gordon Watson 2016-08-31 18:33:13 UTC
Description of problem:

The Red Hat Storage Administration Guide states that if the server is running RHGS 3.1.3, then the client must also be running 3.1.3:

   https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Accessing_Data_-_Setting_Up_Clients.html#sect-Native_Client 


Currently, the latest RHEV-H for RHEV 3.6 and the RHVH 4.0 image still provide only 'glusterfs-*-3.7.1-16.el7' RPMs, i.e. RHGS 3.1.1.
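For reference, one way to check whether a host's glusterfs client bits meet a required level is to query the installed version and compare it with `sort -V`. This is only a sketch: the installed version here is hardcoded to the 3.7.1 build mentioned above (on a real host it would come from `rpm -q`), and the required version 3.7.9 is an assumption about the glusterfs level shipped with RHGS 3.1.3.

```shell
# Assumed minimum glusterfs version for RHGS 3.1.3 (not confirmed in this bug).
required="3.7.9"

# On a real RHEV-H host this would be:
#   installed=$(rpm -q --qf '%{VERSION}' glusterfs)
# Hardcoded here to the version RHEV-H currently ships.
installed="3.7.1"

# sort -V orders version strings numerically; if the installed version
# sorts first and differs from the required one, the client is too old.
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$required" ]; then
    echo "client too old: $installed < $required"
else
    echo "client OK: $installed >= $required"
fi
```

With the values above this reports the client as too old, which matches the mismatch described in this bug.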




Version-Release number of selected component (if applicable):

RHEV-H 7.2 for 3.6


Additional info:

See the subsequent comment for the main reason for requesting this.

Comment 2 Yaniv Kaul 2016-08-31 18:56:30 UTC
Looks like we need to sync with the latest downstream Gluster client bits on every RHVH z-stream.

Comment 9 Marina Kalinin 2016-10-21 15:27:08 UTC
Closing this bug, since the two related RHEV and RHV bugs are closed as well.
Gordon, if incorrect, please reopen.