Bug 1620383 - LV and VG are not deleted and brick is not unmounted when the device delete operation is performed on heketi
Summary: LV and VG are not deleted and brick is not unmounted when the device delete operation is performed on heketi
Keywords:
Status: CLOSED DUPLICATE of bug 1573304
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: cns-3.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: John Mulligan
QA Contact: Nitin Goyal
URL:
Whiteboard:
Depends On:
Blocks: 1568862
 
Reported: 2018-08-23 06:23 UTC by Nitin Goyal
Modified: 2018-08-27 11:13 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-27 11:13:10 UTC
Embargoed:


Attachments

Description Nitin Goyal 2018-08-23 06:23:52 UTC
Description of problem:
The LV and VG are not deleted and the brick is not unmounted after performing a device delete operation on heketi.


Version-Release number of selected component (if applicable):
heketi-7.0.0-6.el7rhgs.x86_64
rhgs-volmanager-rhel7           3.4.0-3
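For reference, a minimal sketch of how these versions can be collected, assuming the pod names from the oc output in comment 2 (they will differ on other setups):

oc rsh heketi-storage-1-fgdt4 rpm -q heketi
oc rsh glusterfs-storage-hcfrh rpm -qa | grep gluster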


How reproducible:


Steps to Reproduce:
1. Create a setup with 4 gluster nodes (one device on each node).
2. Delete the device from any node where the heketidbstorage brick is present.
3. Check the lvs, vgs, and df output on that node (see the command sketch after these steps).
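
A minimal command sketch of the flow above, assuming the usual heketi-cli device lifecycle (the device ID is an illustrative placeholder, not taken from this setup):

# locate the device hosting a heketidbstorage brick
heketi-cli topology info

# a device normally has to be disabled and removed before it can be deleted
heketi-cli device disable <device-id>
heketi-cli device remove <device-id>
heketi-cli device delete <device-id>

# then verify on the affected gluster pod
oc rsh <glusterfs-pod>
lvs; vgs; df -kh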

Actual results:
The LV and VG are not deleted and the brick is still mounted.

Expected results:
The LV and VG should be deleted and the brick should be unmounted.
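
If manual cleanup is needed in the meantime, a hedged sketch using the VG/LV names from the output in comment 2 below (run inside the affected gluster pod, and only after confirming nothing is still using the brick):

umount /var/lib/heketi/mounts/vg_8ecee289ed5996e00364ceae8e5d529b/brick_92478a15773468fe51dbf2766fb6275a
lvremove vg_8ecee289ed5996e00364ceae8e5d529b/brick_92478a15773468fe51dbf2766fb6275a
lvremove vg_8ecee289ed5996e00364ceae8e5d529b/tp_cfa2ac2188b6b1ff7c843235a5445534
vgremove vg_8ecee289ed5996e00364ceae8e5d529b
# also drop any stale /etc/fstab entry heketi left for this brick mount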

Additional info:

Comment 2 Nitin Goyal 2018-08-23 06:41:07 UTC
Node where the LV, VG, and brick are still present:

Node Id: 1e0a53de87e477c8d42863b91fe23ec2
State: online
Cluster Id: b1c73f103b17ca9781c154cde82e95da
Zone: 4
Management Hostname: dhcp47-196.lab.eng.blr.redhat.com
Storage Hostname: 10.70.47.196
Tags:
  arbiter: disabled
Devices:
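
The listing above appears to be heketi's view of the node (heketi-cli node info output); note that the Devices section is already empty, i.e. the device is gone from heketi's database even though the LV, VG, and brick mount persist on disk. A sketch of the query, assuming the node ID shown above:

heketi-cli node info 1e0a53de87e477c8d42863b91fe23ec2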


[root@dhcp47-105 ~]# oc get pods -o wide
NAME                                          READY     STATUS    RESTARTS   AGE       IP             NODE
glusterblock-storage-provisioner-dc-1-f24vk   1/1       Running   1          1d        10.129.0.4     dhcp47-110.lab.eng.blr.redhat.com
glusterfs-storage-44xtj                       1/1       Running   1          1d        10.70.46.144   dhcp46-144.lab.eng.blr.redhat.com
glusterfs-storage-96764                       1/1       Running   1          1d        10.70.47.110   dhcp47-110.lab.eng.blr.redhat.com
glusterfs-storage-dkr7m                       1/1       Running   0          48m       10.70.46.203   dhcp46-203.lab.eng.blr.redhat.com
glusterfs-storage-hcfrh                       1/1       Running   1          1d        10.70.47.196   dhcp47-196.lab.eng.blr.redhat.com
heketi-storage-1-fgdt4                        1/1       Running   1          1d        10.131.0.3     dhcp46-144.lab.eng.blr.redhat.com


[root@dhcp47-105 ~]# oc rsh glusterfs-storage-hcfrh

sh-4.2# df -kh
Filesystem                                                                              Size  Used Avail Use% Mounted on
/dev/dm-17                                                                               10G  321M  9.7G   4% /
devtmpfs                                                                                 24G     0   24G   0% /dev
shm                                                                                      64M     0   64M   0% /dev/shm
/dev/sdb1                                                                               100G  892M  100G   1% /run
/dev/mapper/rhel_dhcp46--180-root                                                        50G  2.2G   48G   5% /etc/ssl
tmpfs                                                                                    24G  2.4M   24G   1% /run/lvm
tmpfs                                                                                    24G     0   24G   0% /sys/fs/cgroup
tmpfs                                                                                    24G   16K   24G   1% /run/secrets/kubernetes.io/serviceaccount
/dev/mapper/vg_8ecee289ed5996e00364ceae8e5d529b-brick_92478a15773468fe51dbf2766fb6275a  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_8ecee289ed5996e00364ceae8e5d529b/brick_92478a15773468fe51dbf2766fb6275a


sh-4.2# lvs
  LV                                     VG                                  Attr       LSize   Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool                            docker-vg                           twi-aot---  39.79g                                            21.46  1.70                            
  home                                   rhel_dhcp46-180                     -wi-ao----  30.00g                                                                                   
  root                                   rhel_dhcp46-180                     -wi-ao----  50.00g                                                                                   
  swap                                   rhel_dhcp46-180                     -wi-a----- <15.00g                                                                                   
  brick_92478a15773468fe51dbf2766fb6275a vg_8ecee289ed5996e00364ceae8e5d529b Vwi-aotz--   2.00g tp_cfa2ac2188b6b1ff7c843235a5445534        0.70                                   
  tp_cfa2ac2188b6b1ff7c843235a5445534    vg_8ecee289ed5996e00364ceae8e5d529b twi-aotz--   2.00g                                            0.70   0.33                            


sh-4.2# vgs
  VG                                  #PV #LV #SN Attr   VSize    VFree   
  docker-vg                             1   1   0 wz--n- <100.00g   60.00g
  rhel_dhcp46-180                       1   3   0 wz--n-  <95.00g       0 
  vg_8ecee289ed5996e00364ceae8e5d529b   1   2   0 wz--n- 1000.87g <998.85g

Comment 5 John Mulligan 2018-08-23 19:10:34 UTC
Yes, this is a known issue in RHGS where brick multiplexing leaves FDs open on the brick file system. This is the same as bug 1573304, in my opinion. If you agree, can we mark this as a duplicate?
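
To verify the open-FD diagnosis, a minimal sketch run inside the affected gluster pod, using the brick mount path from the df output in comment 2:

# show processes keeping the brick mount busy
fuser -vm /var/lib/heketi/mounts/vg_8ecee289ed5996e00364ceae8e5d529b/brick_92478a15773468fe51dbf2766fb6275a

# or inspect the multiplexed brick process's open FDs directly
ls -l /proc/$(pidof glusterfsd)/fd | grep brick_92478a15773468fe51dbf2766fb6275a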

Comment 9 Michael Adam 2018-08-27 11:13:10 UTC

*** This bug has been marked as a duplicate of bug 1573304 ***

