Bug 1465843 - PV not getting deleted on deletion of PVC as gluster volume not getting stopped
Summary: PV not getting deleted on deletion of PVC as gluster volume not getting stopped
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Raghavendra Talur
QA Contact: Anoop
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-28 10:56 UTC by Shekhar Berry
Modified: 2017-08-02 14:15 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-02 14:15:44 UTC
Embargoed:



Description Shekhar Berry 2017-06-28 10:56:29 UTC
Description of problem:

In my OCP setup, persistent storage is provided by CNS. I deleted the application pod which had a gluster volume mounted inside it.
I then deleted the PVC with the expectation that the PV corresponding to that PVC would also be deleted.

But the PV does not get deleted and instead enters a Failed state. Here are the steps:

Step 1: oc delete pvc pvc02qg2f278h
PVC successfully deleted

Step 2: oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                 REASON    AGE
pvc-9260b13c-518e-11e7-b27a-782bcb736d36   2Gi        RWO           Delete          Failed    fio19/pvc02qg2f278h             13d

As seen above, the PV goes to the Failed state.
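
The expectation above follows from the Delete reclaim policy on the dynamically provisioned PV: when the PVC is removed, the provisioner is supposed to delete the backing gluster volume and then the PV itself. As an illustrative check (not output captured from the original run), the policy can be read straight off the PV:

oc get pv pvc-9260b13c-518e-11e7-b27a-782bcb736d36 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
(expected output: Delete)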

oc describe pv pvc-9260b13c-518e-11e7-b27a-782bcb736d36
Name:		pvc-9260b13c-518e-11e7-b27a-782bcb736d36
Labels:		<none>
StorageClass:	cnsclass
Status:		Failed
Claim:		fio19/pvc02qg2f278h
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	2Gi
Message:	Unable to delete volume vol_f59ef12318741cfe2cce1b961fd7dae2: Unable to execute command on glusterfs-hpn5t: volume delete: vol_f59ef12318741cfe2cce1b961fd7dae2: failed: Staging failed on 10.16.153.63. Error: Volume vol_f59ef12318741cfe2cce1b961fd7dae2 has been started.Volume needs to be stopped before deletion.
Source:
    Type:		Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:	glusterfs-dynamic-pvc02qg2f278h
    Path:		vol_f59ef12318741cfe2cce1b961fd7dae2
    ReadOnly:		false
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  17m		17m		1	{persistentvolume-controller }			Warning		VolumeFailedDelete	Unable to delete volume vol_f59ef12318741cfe2cce1b961fd7dae2: Unable to execute command on glusterfs-hpn5t: volume delete: vol_f59ef12318741cfe2cce1b961fd7dae2: failed: Staging failed on 10.16.153.63. Error: Volume vol_f59ef12318741cfe2cce1b961fd7dae2 has been started.Volume needs to be stopped before deletion.
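
The error indicates that heketi issued "volume delete" while the gluster volume was still in the Started state, which glusterd rejects. As a rough manual cleanup sketch (my own assumption, not something performed on this setup; the heketi volume id below is a placeholder), the volume can be stopped by hand and the leftover objects removed:

# confirm the volume is still started (pod and volume names taken from the error above)
oc exec glusterfs-hpn5t -- gluster volume info vol_f59ef12318741cfe2cce1b961fd7dae2
# stop it non-interactively
oc exec glusterfs-hpn5t -- gluster --mode=script volume stop vol_f59ef12318741cfe2cce1b961fd7dae2
# delete the backing volume through heketi so its database stays consistent
heketi-cli volume delete <heketi-volume-id>
# finally remove the Failed PV object
oc delete pv pvc-9260b13c-518e-11e7-b27a-782bcb736d36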


Version-Release number of selected component (if applicable):

rpm -qa | grep gluster
glusterfs-client-xlators-3.8.4-27.el7rhgs.x86_64
glusterfs-cli-3.8.4-27.el7rhgs.x86_64
glusterfs-server-3.8.4-27.el7rhgs.x86_64
glusterfs-libs-3.8.4-27.el7rhgs.x86_64
glusterfs-3.8.4-27.el7rhgs.x86_64
glusterfs-api-3.8.4-27.el7rhgs.x86_64
glusterfs-fuse-3.8.4-27.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-27.el7rhgs.x86_64
gluster-block-0.2-3.el7rhgs.x86_64


rpm -qa | grep heketi
heketi-client-5.0.0-1.el7rhgs.x86_64
python-heketi-5.0.0-1.el7rhgs.x86_64
heketi-5.0.0-1.el7rhgs.x86_64

rpm -qa | grep cns
cns-deploy-5.0.0-2.el7rhgs.x86_64


oc version
oc v3.5.5.20
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://gprfs013.sbu.lab.eng.bos.redhat.com:8443
openshift v3.5.5.20
kubernetes v1.5.2+43a9be4

docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-31.1.git97ba2c0.el7.x86_64
 Go version:      go1.8
 Git commit:      97ba2c0/1.12.6
 Built:           Fri May 26 16:26:51 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-31.1.git97ba2c0.el7.x86_64
 Go version:      go1.8
 Git commit:      97ba2c0/1.12.6
 Built:           Fri May 26 16:26:51 2017
 OS/Arch:         linux/amd64



How reproducible:

I have seen this twice in my setup.


Additional info:


Heketi Log: http://perf1.perf.lab.eng.bos.redhat.com/pub/shberry/heketi_log/heketi.log

Glusterd Log from Node: http://perf1.perf.lab.eng.bos.redhat.com/pub/shberry/gluster_log/glusterd.log

Glustershd Log from Node: http://perf1.perf.lab.eng.bos.redhat.com/pub/shberry/gluster_log/glustershd.log

Comment 2 Shekhar Berry 2017-06-28 10:59:00 UTC
I forgot to mention that brick multiplexing was enabled for the above scenario.
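
For anyone trying to reproduce with the same configuration, brick multiplexing is a cluster-wide gluster option. A generic way to enable it from inside one of the glusterfs pods would be (illustrative command, not captured from this setup; availability of the option depends on the gluster build):

oc exec glusterfs-hpn5t -- gluster volume set all cluster.brick-multiplex on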

Comment 3 Humble Chirammal 2017-08-02 14:15:44 UTC
This issue has not been seen in the latest builds. I am closing this bug; please feel free to reopen it if the issue occurs again.

