Bug 1800703

Summary: gfapi: SEGV on calling glfs_init() after glfs_fini()
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: core
Status: CLOSED ERRATA
Severity: high
Version: rhgs-3.5
Reporter: Xavi Hernandez <jahernan>
Assignee: Xavi Hernandez <jahernan>
QA Contact: Arun Kumar <arukumar>
CC: arukumar, asriram, hchiramm, jahernan, jthottan, madam, moagrawa, msaini, pasik, pkarampu, pprakash, prasanna.kalever, rcyriac, rgeorge, rhs-bugs, rkavunga, sabose, sheggodu, skoduri, storage-qa-internal, susgupta, vamahaja, vbellur, xiubli
Keywords: ZStream
Target Release: RHGS 3.5.z Async Update
Fixed In Version: glusterfs-6.0-30.1
Doc Type: Bug Fix
Doc Text:
Previously, applications based on gfapi such as nfs-ganesha, gluster-block or samba could malfunction or crash in some cases due to a memory corruption bug. With this update, this issue is resolved.
Clone Of: 1796628
Last Closed: 2020-03-10 14:42:19 UTC
Bug Depends On: 1801684
Bug Blocks: 1725716, 1746324, 1796628, 1804656
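
For reference, a minimal reproducer sketch for the crash described in the summary (hypothetical: the volume name, server address, and loop count are placeholders, and this is not the exact test used during verification). Before the fix, a fresh glfs_new()/glfs_init() cycle in the same process after a completed glfs_fini() could receive a SEGV:

/* repro.c - hypothetical sketch; assumes a reachable volume "testvol"
 * served from localhost. Build with:
 *   gcc repro.c -o repro $(pkg-config --cflags --libs glusterfs-api)
 */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    for (int i = 0; i < 2; i++) {
        glfs_t *fs = glfs_new("testvol");
        if (!fs) {
            fprintf(stderr, "glfs_new failed\n");
            return 1;
        }
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        /* On unfixed builds, the second pass through glfs_init()
         * (after the first iteration's glfs_fini()) could crash
         * with SIGSEGV. */
        if (glfs_init(fs) != 0) {
            fprintf(stderr, "glfs_init failed\n");
            glfs_fini(fs);
            return 1;
        }
        glfs_fini(fs);
    }
    printf("init/fini cycles completed without a crash\n");
    return 0;
}

This init/fini pattern is what gfapi consumers such as nfs-ganesha, gluster-block and samba exercise when exports or block volumes are created and deleted repeatedly, which is why the verification below loops those operations.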

Comment 11 Manisha Saini 2020-03-03 09:31:34 UTC
Verified this BZ with

# rpm -qa | grep ganesha
nfs-ganesha-2.7.3-9.el7rhgs.x86_64
glusterfs-ganesha-6.0-30.1.el7rhgs.x86_64
nfs-ganesha-gluster-2.7.3-9.el7rhgs.x86_64
nfs-ganesha-selinux-2.7.3-9.el7rhgs.noarch


The following scenarios were performed as part of this BZ validation:

1. Exporting, unexporting, and re-exporting multiple volumes in a loop
2. Sanity checks of ganesha (cthon, posix, and pynfs tests) along with tests covering ganesha functionality
3. readdir tests with multiple lookups (du -sh, find, and ls -laRt), plus small-file creates, deletes, lookups, and reads
4. Upgrade testing 

OCS testing should be tracked in BZ# 1796628 (gfapi: tcmu-runner receives SEGV on calling glfs_init() after glfs_fini()); it is not in scope of validation under this bug, which is verified from an RHGS environment point of view.

Comment 12 Arun Kumar 2020-03-03 11:08:34 UTC
I have tested the fix and it works properly. With reference to comment 10, the outputs are as follows:

1. Created a block-hosting volume (BHV) of 105 GB using heketi
-----------------------------------------
# heketi-cli volume create --size 105 --block
Name: vol_5ca86914a3dd6fe9fb81861888b62a2a
Size: 105
Volume Id: 5ca86914a3dd6fe9fb81861888b62a2a
Cluster Id: 53376e9bf1a2cf1a3ae14b38e093b2d7
Mount: 10.70.47.189:vol_5ca86914a3dd6fe9fb81861888b62a2a
Mount Options: backup-volfile-servers=10.70.47.184,10.70.46.86
Block: true
Free Size: 102
Reserved Size: 3
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3

2. Created a block PVC of 100 GB
----------------------------------
# sh pvc-create.sh pvc1 100
persistentvolumeclaim/pvc1 created
# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc1      Bound     pvc-647f5a6f-56e8-11ea-b9a6-005056b2c11b   100Gi      RWO            glusterfs-block   19s

3. Deleted the block PVC
----------------------------------
# oc delete pvc pvc1
persistentvolumeclaim "pvc1" deleted
[root@dhcp46-229 scripts]# oc get pvc
No resources found.

4. Recreated the block PVC
-----------------------------------
# sh pvc-create.sh pvc1 100
persistentvolumeclaim/pvc1 created
# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc1      Bound     pvc-867ff6c4-56e8-11ea-b9a6-005056b2c11b   100Gi      RWO            glusterfs-block   15s

Comment 18 errata-xmlrpc 2020-03-10 14:42:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0778