Bug 1479777 - Return proper status to caller when volume delete is attempted.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: heketi
Version: cns-3.6
Hardware: Unspecified  OS: Unspecified
Priority: unspecified  Severity: medium
Target Milestone: ---
Target Release: CNS 3.6
Assigned To: Raghavendra Talur
QA Contact: krishnaram Karthick
Duplicates: 1475701 1480228
Depends On:
Blocks: 1445448 1481619
 
Reported: 2017-08-09 07:37 EDT by Humble Chirammal
Modified: 2017-10-11 03:09 EDT
CC List: 8 users

See Also:
Fixed In Version: heketi-5.0.0-11 rhgs-volmanager-docker-5.0.0-13
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-10-11 03:09:46 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Humble Chirammal 2017-08-09 07:37:33 EDT
Description of problem:

Aug 08 15:21:10 node2 systemd[1]: Starting GlusterFS, a clustered file-system server...
Aug 08 15:21:13 node2 systemd[1]: Started GlusterFS, a clustered file-system server.
[negroni] Started GET /queue/5c0f35addb8f5c292658d95a2428a650
[negroni] Completed 200 OK in 55.477µs
[negroni] Started GET /queue/5c0f35addb8f5c292658d95a2428a650
[negroni] Completed 200 OK in 50.448µs
[kubeexec] DEBUG 2017/08/08 18:12:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:250: Host: 192.168.35.3 Pod: glusterfs-m7whg Command: gluster-block delete vol_96d8648ebd2ff7dcb87be3a2587c6246/blockvol_abe05e7c0c3389d118b6607f1e582b7d
Result: SUCCESSFUL ON:   192.168.35.3
RESULT: SUCCESS
[asynchttp] INFO 2017/08/08 18:12:29 asynchttp.go:129: Completed job 5c0f35addb8f5c292658d95a2428a650 in 2.262625428s
[sshexec] ERROR 2017/08/08 18:12:29 /src/github.com/heketi/heketi/executors/sshexec/block_volume.go:108: Unable to get the block volume delete info for block volume blockvol_abe05e7c0c3389d118b6607f1e582b7d
[heketi] ERROR 2017/08/08 18:12:29 /src/github.com/heketi/heketi/apps/glusterfs/block_volume_entry.go:387: Unable to delete volume: Unable to get the block volume delete info for block volume blockvol_abe05e7c0c3389d118b6607f1e582b7d
[heketi] ERROR 2017/08/08 18:12:29 /src/github.com/heketi/heketi/apps/glusterfs/app_block_volume.go:189: Failed to delete volume abe05e7c0c3389d118b6607f1e582b7d: Unable to get the block volume delete info for block volume blockvol_abe05e7c0c3389d118b6607f1e582b7d
[negroni] Started GET /queue/5c0f35addb8f5c292658d95a2428a650
[negroni] Completed 500 Internal Server Error in 114.226µs



Even though the gluster-block delete succeeded on the node, an error (500 Internal Server Error) was returned to the caller.
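
The log above shows the core of the bug: gluster-block reports RESULT: SUCCESS for the delete, but the executor then fails to read the "block volume delete info" and that failure is propagated to the REST caller as a 500. Below is a minimal, hypothetical Go sketch of that decision point; the function and helper names are assumptions that only stand in for the real code in executors/sshexec/block_volume.go, they are not the actual heketi implementation.

package main

import (
	"fmt"
	"strings"
)

// deleteBlockVolume decides what status to report back to the REST caller
// after running the delete command on a gluster node (stubbed out below).
func deleteBlockVolume(block string) error {
	out := runGlusterBlockDelete(block) // stands in for the kubeexec/sshexec call

	// The CLI already confirmed the volume is gone; a missing or unparsable
	// "delete info" section should be logged, not turned into a 500 for the caller.
	if strings.Contains(out, "RESULT: SUCCESS") {
		if _, err := parseDeleteInfo(out); err != nil {
			fmt.Printf("warning: no delete info for %s: %v\n", block, err)
		}
		return nil // caller sees success and can clean up its bookkeeping entry
	}
	return fmt.Errorf("gluster-block delete failed for %s:\n%s", block, out)
}

func runGlusterBlockDelete(block string) string {
	// Stub for "gluster-block delete <vol>/<block>"; mirrors the logged output.
	return "SUCCESSFUL ON:   192.168.35.3\nRESULT: SUCCESS"
}

func parseDeleteInfo(out string) (string, error) {
	// Stub: reproduce the condition hit in sshexec/block_volume.go:108.
	return "", fmt.Errorf("delete info not present in output")
}

func main() {
	fmt.Println("delete status:", deleteBlockVolume("blockvol_abe05e7c0c3389d118b6607f1e582b7d"))
}

The point of the sketch is only the ordering of the checks: the result of the delete command, not the optional follow-up lookup, should drive the status returned to the caller.
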
Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a gluster-block volume through heketi.
2. Delete the block volume through heketi.
3. Check the status returned to the caller for the delete request.

Actual results:
gluster-block delete succeeds on the node (RESULT: SUCCESS), but heketi logs "Unable to get the block volume delete info" and the delete request completes with 500 Internal Server Error.

Expected results:
heketi returns success to the caller and removes the block volume entry from its database.

Additional info:
Comment 5 Mohamed Ashiq 2017-08-11 17:15:15 EDT
*** Bug 1480228 has been marked as a duplicate of this bug. ***
Comment 8 krishnaram Karthick 2017-08-25 00:10:30 EDT
This issue is still seen in the following build.

heketi-client-5.0.0-9.el7rhgs.x86_64
image: rhgs3/rhgs-volmanager-rhel7:3.3.0-11

[kubeexec] DEBUG 2017/08/25 03:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:250: Host: dhcp46-203.lab.eng.blr.redhat.com Pod: glusterfs-4wpxm Command: gluster-block create vol_54eafa3f54fc5c08a4f5ac4ba9ed9b50/blockvol_58039729d94789d4c223b581ce5e1be5  ha 3 auth enable  10.70.46.197,10.70.46.199,10.70.46.203 1G --json
Result: { "IQN": "iqn.2016-12.org.gluster-block:11263aeb-176a-4901-9e32-8ef68b117320", "USERNAME": "11263aeb-176a-4901-9e32-8ef68b117320", "PASSWORD": "1461f8b7-8837-4c92-aabf-0cdcb170d24a", "PORTAL(S)": [ "10.70.46.197:3260", "10.70.46.199:3260", "10.70.46.203:3260" ], "RESULT": "SUCCESS" }
[heketi] INFO 2017/08/25 03:37:05 Created block volume 58039729d94789d4c223b581ce5e1be5
[asynchttp] INFO 2017/08/25 03:37:05 asynchttp.go:129: Completed job ed37bc9907f4abb639fa06165c4610a0 in 3.94499584s
[negroni] Started GET /queue/ed37bc9907f4abb639fa06165c4610a0
[negroni] Completed 303 See Other in 66.617µs
[negroni] Started GET /blockvolumes/58039729d94789d4c223b581ce5e1be5
[negroni] Completed 200 OK in 1.988713ms
[negroni] Started DELETE /blockvolumes/58039729d94789d4c223b581ce5e1be5
[negroni] Completed 202 Accepted in 456.162µs
[asynchttp] INFO 2017/08/25 03:51:15 asynchttp.go:125: Started job 6a801482a995e17c16f6258c01c3b3e9
[heketi] INFO 2017/08/25 03:51:15 Destroying volume 58039729d94789d4c223b581ce5e1be5
[negroni] Started GET /queue/6a801482a995e17c16f6258c01c3b3e9
[negroni] Completed 200 OK in 39.735µs
[sshexec] INFO 2017/08/25 03:51:15 Check Glusterd service status in node dhcp46-203.lab.eng.blr.redhat.com
[kubeexec] DEBUG 2017/08/25 03:51:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:250: Host: dhcp46-203.lab.eng.blr.redhat.com Pod: glusterfs-4wpxm Command: systemctl status glusterd
Result: ● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-08-25 01:05:03 UTC; 2h 46min ago
  Process: 941 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 942 (glusterd)
   CGroup: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24bd9d8c_8931_11e7_a3ad_005056b32785.slice/docker-ede022f10098403b7cf8d4a29d4b9321701eda75dc2cb4097029e1e6d5364f27.scope/system.slice/glusterd.service
           ├─ 942 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─1378 /usr/sbin/glusterfsd -s 10.70.46.203 --volfile-id heketidbstorage.10.70.46.203.var-lib-heketi-mounts-vg_94677a02975a69c20d824ea7baa033f4-brick_a5d656793dc1bd5ea08d473d89853802-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.203-var-lib-heketi-mounts-vg_94677a02975a69c20d824ea7baa033f4-brick_a5d656793dc1bd5ea08d473d89853802-brick.pid -S /var/run/gluster/1b9da39f021059bc8411046d93034faa.socket --brick-name /var/lib/heketi/mounts/vg_94677a02975a69c20d824ea7baa033f4/brick_a5d656793dc1bd5ea08d473d89853802/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_94677a02975a69c20d824ea7baa033f4-brick_a5d656793dc1bd5ea08d473d89853802-brick.log --xlator-option *-posix.glusterd-uuid=6d3e55f3-d852-49b6-853c-e32b5f474e97 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
           └─5843 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/82f1921117e36bcc6f475e7a27223161.socket --xlator-option *replicate*.node-uuid=6d3e55f3-d852-49b6-853c-e32b5f474e97
Aug 25 01:05:01 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
Aug 25 01:05:03 dhcp46-203.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
[negroni] Started GET /queue/6a801482a995e17c16f6258c01c3b3e9
[negroni] Completed 200 OK in 89.741µs
[kubeexec] DEBUG 2017/08/25 03:51:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:250: Host: dhcp46-203.lab.eng.blr.redhat.com Pod: glusterfs-4wpxm Command: gluster-block delete vol_54eafa3f54fc5c08a4f5ac4ba9ed9b50/blockvol_58039729d94789d4c223b581ce5e1be5
Result: SUCCESSFUL ON:   10.70.46.197 10.70.46.199 10.70.46.203
RESULT: SUCCESS
[asynchttp] INFO 2017/08/25 03:51:16 asynchttp.go:129: Completed job 6a801482a995e17c16f6258c01c3b3e9 in 1.379759245s
[sshexec] ERROR 2017/08/25 03:51:16 /src/github.com/heketi/heketi/executors/sshexec/block_volume.go:109: Unable to get the block volume delete info for block volume blockvol_58039729d94789d4c223b581ce5e1be5
[heketi] ERROR 2017/08/25 03:51:16 /src/github.com/heketi/heketi/apps/glusterfs/block_volume_entry.go:387: Unable to delete volume: Unable to get the block volume delete info for block volume blockvol_58039729d94789d4c223b581ce5e1be5
[heketi] ERROR 2017/08/25 03:51:16 /src/github.com/heketi/heketi/apps/glusterfs/app_block_volume.go:189: Failed to delete volume 58039729d94789d4c223b581ce5e1be5: Unable to get the block volume delete info for block volume blockvol_58039729d94789d4c223b581ce5e1be5
[negroni] Started GET /queue/6a801482a995e17c16f6258c01c3b3e9

Since the block volume entry is not deleted from heketi's db, heketi keeps retrying the volume delete and it fails every time.
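
To show why the stale DB entry matters, here is a second small hypothetical Go sketch (the names and the in-memory map are assumptions, not heketi's actual bolt DB schema): the entry is removed only when the delete reports success, so a falsely reported failure leaves it behind and every subsequent delete attempt hits the same error.

package main

import (
	"errors"
	"fmt"
)

// store stands in for heketi's database of block volume entries.
var store = map[string]bool{
	"blockvol_58039729d94789d4c223b581ce5e1be5": true,
}

// destroy pretends the backend delete worked but, as in the logs above, the
// status that reaches this layer is still an error.
func destroy(id string) error {
	return errors.New("Unable to get the block volume delete info")
}

func main() {
	const id = "blockvol_58039729d94789d4c223b581ce5e1be5"
	for attempt := 1; attempt <= 3; attempt++ {
		if err := destroy(id); err != nil {
			// The entry is only deleted on success, so it stays in the store
			// and the next delete request fails the same way.
			fmt.Printf("attempt %d: %v (entry still present: %v)\n", attempt, err, store[id])
			continue
		}
		delete(store, id)
	}
}
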
Comment 9 Humble Chirammal 2017-08-29 07:08:11 EDT
"FAILED QA"
Comment 12 Humble Chirammal 2017-09-07 09:58:55 EDT
*** Bug 1475701 has been marked as a duplicate of this bug. ***
Comment 14 Humble Chirammal 2017-09-11 07:32:31 EDT
(In reply to Raghavendra Talur from comment #13)
> posted patch at
> https://github.com/obnoxxx-collab/heketi/commit/059c59a3261190dc507801587cb8668a3dc393be

Thanks Talur!
Comment 15 Michael Adam 2017-09-11 17:15:09 EDT
(In reply to Raghavendra Talur from comment #13)
> posted patch at
> https://github.com/obnoxxx-collab/heketi/commit/059c59a3261190dc507801587cb8668a3dc393be

Good catch, Talur, thanks!
Comment 16 krishnaram Karthick 2017-09-13 03:12:02 EDT
Verified in build - cns-deploy-5.0.0-37.el7rhgs.x86_64

[root@dhcp47-57 ~]# oc create -f claim.yaml
persistentvolumeclaim "claimheketi" created
[root@dhcp47-57 ~]# oc get pvc
NAME          STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
claimheketi   Bound     pvc-3e861edc-9851-11e7-9442-005056a56b97   5Gi        RWO           glusterblock   1m

[root@dhcp47-57 ~]# heketi-cli blockvolume list
Id:04b87b384a4fbb0dfd9b661d3e5a4332    Cluster:b82565f19d1fe2413047ef76289bf2c9    Name:blockvol_04b87b384a4fbb0dfd9b661d3e5a4332
[root@dhcp47-57 ~]# 
[root@dhcp47-57 ~]# 
[root@dhcp47-57 ~]# oc delete pvc/claimheketi
persistentvolumeclaim "claimheketi" deleted
[root@dhcp47-57 ~]# oc get pvc
No resources found.
[root@dhcp47-57 ~]# oc get pv
No resources found.
[root@dhcp47-57 ~]# 

[root@dhcp47-57 ~]# oc rsh glusterfs-30vb6
sh-4.2# 
sh-4.2# gluster-block list vol_13842eb9112def6cc941897316166f6b
*Nil*
Comment 17 errata-xmlrpc 2017-10-11 03:09:46 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2879
