Bug 1800703 - gfapi: SEGV on calling glfs_init() after glfs_fini()
Summary: gfapi: SEGV on calling glfs_init() after glfs_fini()
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Async Update
Assignee: Xavi Hernandez
QA Contact: Arun Kumar
URL:
Whiteboard:
Depends On: 1801684
Blocks: 1725716 1746324 1796628 1804656
 
Reported: 2020-02-07 18:16 UTC by Xavi Hernandez
Modified: 2020-03-10 14:42 UTC
24 users

Fixed In Version: glusterfs-6.0-30.1
Doc Type: Bug Fix
Doc Text:
Previously, applications based on gfapi such as nfs-ganesha, gluster-block or samba could malfunction or crash in some cases due to a memory corruption bug. With this update, this issue is resolved.
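The failing sequence named in the summary (a second glfs_init() after a glfs_fini()) can be sketched as a minimal gfapi reproducer. This is an illustrative sketch, not taken from the bug report: the volume name "testvol" and server "server1" are hypothetical, and building it requires glusterfs-api-devel and a reachable volume.

```shell
# Write a minimal init-after-fini reproducer (hypothetical volume and
# server names; adjust for a real cluster before running).
cat > glfs_reinit.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    /* Two full new -> init -> fini cycles: before the fix, the second
     * glfs_init() could SEGV because glfs_fini() left corrupted
     * global state behind. */
    for (int i = 0; i < 2; i++) {
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
            exit(EXIT_FAILURE);
        glfs_set_volfile_server(fs, "tcp", "server1", 24007);
        if (glfs_init(fs) != 0) {
            fprintf(stderr, "glfs_init failed on cycle %d\n", i);
            exit(EXIT_FAILURE);
        }
        glfs_fini(fs);
    }
    printf("no crash\n");
    return 0;
}
EOF
# Compile and run only on a host with libgfapi and a reachable volume:
#   gcc glfs_reinit.c -o glfs_reinit -lgfapi
#   ./glfs_reinit
```

This is the same init/fini pattern that gfapi consumers such as nfs-ganesha, gluster-block, and samba exercise when they tear down and re-create a volume connection.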
Clone Of: 1796628
Environment:
Last Closed: 2020-03-10 14:42:19 UTC
Target Upstream Version:




Links
Red Hat Product Errata RHBA-2020:0778 (last updated 2020-03-10 14:42:31 UTC)

Comment 11 Manisha Saini 2020-03-03 09:31:34 UTC
Verified this BZ with

# rpm -qa | grep ganesha
nfs-ganesha-2.7.3-9.el7rhgs.x86_64
glusterfs-ganesha-6.0-30.1.el7rhgs.x86_64
nfs-ganesha-gluster-2.7.3-9.el7rhgs.x86_64
nfs-ganesha-selinux-2.7.3-9.el7rhgs.noarch


Following scenarios were performed as part of this BZ validation-

1. Exporting/unexporting/re-exporting multiple volumes in a loop
2. Ganesha sanity checks (cthon, posix, pynfs tests) along with tests covering ganesha functionality
3. readdir tests with multiple lookups (du -sh, find, ls -laRt); small-file creates, deletes, lookups, and reads
4. Upgrade testing
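Scenario 1 above can be sketched as a loop. The volume names are hypothetical, and this dry-run version only records the gluster commands it would issue; on a live RHGS node with nfs-ganesha configured, drop the "run" wrapper to actually execute them.

```shell
#!/bin/sh
# Dry-run sketch of the export/unexport/re-export loop (scenario 1).
# "run" records each command instead of executing it; the volume names
# vol1..vol3 are hypothetical.
run() { echo "$@" >> export-loop.log; }
: > export-loop.log
for vol in vol1 vol2 vol3; do
    for i in 1 2 3; do
        run gluster volume set "$vol" ganesha.enable on   # export via nfs-ganesha
        run gluster volume set "$vol" ganesha.enable off  # unexport
    done
done
```

Each unexport/re-export cycle drives nfs-ganesha through the gfapi teardown-and-reinit path that this bug affected.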

OCS testing is tracked in BZ# 1796628 (gfapi: tcmu-runner receives SEGV on calling glfs_init() after glfs_fini()), as it is out of scope for validation under this bug, which is verified from an RHGS environment point of view.

Comment 12 Arun Kumar 2020-03-03 11:08:34 UTC
I have tested the fix and it works properly. With respect to comment 10, the outputs are as follows:

1. Created a block hosting volume (BHV) of 105 GB using heketi
-----------------------------------------
# heketi-cli volume create --size 105 --block
Name: vol_5ca86914a3dd6fe9fb81861888b62a2a
Size: 105
Volume Id: 5ca86914a3dd6fe9fb81861888b62a2a
Cluster Id: 53376e9bf1a2cf1a3ae14b38e093b2d7
Mount: 10.70.47.189:vol_5ca86914a3dd6fe9fb81861888b62a2a
Mount Options: backup-volfile-servers=10.70.47.184,10.70.46.86
Block: true
Free Size: 102
Reserved Size: 3
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
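The sizes reported above are consistent: the 105 GB BHV reserves 3 GB, leaving 102 GB free, which is enough for the 100 GB block PVC created in the next step. As a quick arithmetic check:

```shell
# Sanity-check the heketi size accounting shown above.
SIZE=105; RESERVED=3
FREE=$((SIZE - RESERVED))
echo "free: ${FREE} GB"                       # matches "Free Size: 102"
[ "$FREE" -ge 100 ] && echo "a 100 GB block PVC fits"
```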

2. Created a block PVC of 100 GB
----------------------------------
# sh pvc-create.sh pvc1 100
persistentvolumeclaim/pvc1 created
# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc1      Bound     pvc-647f5a6f-56e8-11ea-b9a6-005056b2c11b   100Gi      RWO            glusterfs-block   19s

3. Deleted the block PVC
----------------------------------
# oc delete pvc pvc1
persistentvolumeclaim "pvc1" deleted
[root@dhcp46-229 scripts]# oc get pvc
No resources found.

4. Again created the block PVC  
-----------------------------------
# sh pvc-create.sh pvc1 100
persistentvolumeclaim/pvc1 created
# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc1      Bound     pvc-867ff6c4-56e8-11ea-b9a6-005056b2c11b   100Gi      RWO            glusterfs-block   15s
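Steps 2-4 can also be repeated in a loop to stress the delete/recreate path more heavily. This is an illustrative extension of the verification above, not part of it: pvc-create.sh is the helper script used in the transcript, and this dry-run version only records the commands it would execute.

```shell
#!/bin/sh
# Dry-run sketch of looping steps 2-4 (create, delete, recreate the
# block PVC); "run" records each command instead of executing it.
run() { echo "$@" >> pvc-loop.log; }
: > pvc-loop.log
for i in 1 2 3; do
    run sh pvc-create.sh pvc1 100   # create the 100 GB block PVC (step 2)
    run oc delete pvc pvc1          # delete it (step 3)
done
run sh pvc-create.sh pvc1 100       # recreate it once more, as in step 4
```

Each delete/recreate cycle exercises the gluster-block/tcmu-runner teardown-and-reinit path that the fix addresses.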

Comment 18 errata-xmlrpc 2020-03-10 14:42:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0778

