Bug 1598890 - [Tracker-RHGS-BZ#1628651] Deleting 50 file volumes succeeded but 1 volume did not get deleted.
Summary: [Tracker-RHGS-BZ#1628651] Deleting 50 file volumes succeeded but 1 volume did not get deleted.
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhgs-server-container
Version: cns-3.10
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Saravanakumar
QA Contact: vinutha
URL:
Whiteboard:
Depends On: 1628651
Blocks:
 
Reported: 2018-07-06 18:30 UTC by vinutha
Modified: 2018-10-10 03:01 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-09 16:16:47 UTC
Target Upstream Version:



Description vinutha 2018-07-06 18:30:02 UTC
Creating this new bug based on dev comment #13 in bug 1584639: https://bugzilla.redhat.com/show_bug.cgi?id=1584639


Description of problem:
After deleting 50 file volumes of 25 GB each through heketi, one volume remains listed in 'gluster volume list', but querying it with 'gluster volume info' fails with 'Volume does not exist'. Both heketi topology info and gluster show the undeleted volume.
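One way to spot such leftovers is to diff the volume names heketi knows about against those gluster still lists. This is a minimal sketch, not from the bug itself: the volume names below are sample placeholders, and on a live cluster the two lists would come from 'heketi-cli volume list' (inside the heketi pod) and 'gluster volume list' (inside a glusterfs pod).

```shell
# Sample data standing in for real CLI output; on a live cluster use:
#   heketi_vols=$(oc rsh <heketi-pod> heketi-cli volume list ...)
#   gluster_vols=$(oc rsh <gluster-pod> gluster volume list)
heketi_vols="vol_a1
vol_b2"
gluster_vols="vol_a1
vol_b2
vol_stale9"

# comm -13 prints lines unique to the second input, i.e. volumes that
# gluster still lists but heketi no longer tracks - the failed deletes.
comm -13 <(printf '%s\n' "$heketi_vols" | sort) \
         <(printf '%s\n' "$gluster_vols" | sort)
```

With the sample data above, the diff reports the single stale volume that survived the bulk delete.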

Version-Release number of selected component (if applicable):
# oc rsh heketi-storage-1-s4vdx 
sh-4.2# rpm -qa | grep heketi
python-heketi-7.0.0-2.el7rhgs.x86_64
heketi-client-7.0.0-2.el7rhgs.x86_64
heketi-7.0.0-2.el7rhgs.x86_64

# oc rsh glusterfs-storage-n8kpc 
sh-4.2# rpm -qa | grep gluster
glusterfs-client-xlators-3.8.4-54.12.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.12.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.12.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.12.el7rhgs.x86_64
glusterfs-3.8.4-54.12.el7rhgs.x86_64
glusterfs-api-3.8.4-54.12.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.12.el7rhgs.x86_64
glusterfs-server-3.8.4-54.12.el7rhgs.x86_64
gluster-block-0.2.1-20.el7rhgs.x86_64

# rpm -qa | grep openshift
atomic-openshift-clients-3.10.0-0.67.0.git.0.ccd325f.el7.x86_64
openshift-ansible-roles-3.10.0-0.67.0.git.107.1bd1f01.el7.noarch
atomic-openshift-docker-excluder-3.10.0-0.67.0.git.0.ccd325f.el7.noarch
atomic-openshift-excluder-3.10.0-0.67.0.git.0.ccd325f.el7.noarch
atomic-openshift-3.10.0-0.67.0.git.0.ccd325f.el7.x86_64
openshift-ansible-docs-3.10.0-0.67.0.git.107.1bd1f01.el7.noarch
openshift-ansible-3.10.0-0.67.0.git.107.1bd1f01.el7.noarch
atomic-openshift-hyperkube-3.10.0-0.67.0.git.0.ccd325f.el7.x86_64
atomic-openshift-node-3.10.0-0.67.0.git.0.ccd325f.el7.x86_64
openshift-ansible-playbooks-3.10.0-0.67.0.git.107.1bd1f01.el7.noarch

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
1 volume deletion failed

Expected results:
All volumes should be deleted 

Additional info:
Logs will be attached.

