Bug 1096139 - [SNAPSHOT]: Delete is successful and device is not present but df -h lists it
Summary: [SNAPSHOT]: Delete is successful and device is not present but df -h lists it
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Vijaikumar Mallikarjuna
QA Contact: Rahul Hinduja
URL:
Whiteboard: SNAPSHOT
Depends On:
Blocks: 1098084
 
Reported: 2014-05-09 09:39 UTC by Rahul Hinduja
Modified: 2016-09-17 13:05 UTC (History)
8 users

Fixed In Version: glusterfs-3.6.0.15-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1098084 (view as bug list)
Environment:
Last Closed: 2014-09-22 19:37:02 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1278 0 normal SHIPPED_LIVE Red Hat Storage Server 3.0 bug fix and enhancement update 2014-09-22 23:26:55 UTC

Description Rahul Hinduja 2014-05-09 09:39:50 UTC
Description of problem:
=======================

When a snapshot is successfully deleted, the corresponding device is removed, but df -h still lists the device as mounted.

Create a snapshot of a volume and list it: 

[root@snapshot09 ~]# gluster snapshot list vol1
ra1

[root@snapshot09 ~]# df -h | grep /var/run | wc
    257    1285   27499

Delete a snapshot of a volume vol1:

[root@snapshot09 ~]# gluster snapshot delete ra1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: ra1: snap removed successfully
[root@snapshot09 ~]#

[root@snapshot09 ~]# gluster snapshot list vol1
No snapshots present

The number of mounted devices is still the same:

[root@snapshot09 ~]# df -h | grep /var/run | wc
    257    1285   27499
[root@snapshot09 ~]# 

[root@snapshot09 ~]# df -h | grep /var/run/gluster/snaps/b4207e6cafca4861880455dc97d66046
                       44G  2.0G   40G   5% /var/run/gluster/snaps/b4207e6cafca4861880455dc97d66046/brick1
[root@snapshot09 ~]# 


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.6.0-1.0.el6rhs.x86_64

How reproducible:
=================
1/1


Steps to Reproduce:
===================
1. Create and start a volume
2. Create a snapshot of the volume
3. Check the devices mounted on the system (df -h)
4. Delete the snapshot
5. List the snapshots of the volume; the deleted snapshot should not be present
6. Check the devices mounted on the system (df -h)

Actual results:
===============

df -h still shows the device as mounted:

[root@snapshot09 ~]# df -h | grep b4207e6cafca4861880455dc97d66046
/dev/mapper/VolGroup0-b4207e6cafca4861880455dc97d66046
                       44G  2.0G   40G   5% /var/run/gluster/snaps/b4207e6cafca4861880455dc97d66046/brick1
[root@snapshot09 ~]# 

But the device is no longer present: 

[root@snapshot09 ~]# ls /dev/mapper/VolGroup0-b4207e6cafca4861880455dc97d66046
ls: cannot access /dev/mapper/VolGroup0-b4207e6cafca4861880455dc97d66046: No such file or directory
[root@snapshot09 ~]# 


Expected results:
=================

df -h should not list the device as mounted.


Additional info:
================

Log snippet:
============
[2014-05-09 17:06:38.903632] I [glusterd-handler.c:1367:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2014-05-09 17:06:55.452365] W [glusterd-utils.c:1558:glusterd_snap_volinfo_find] 0-management: Snap volume b4207e6cafca4861880455dc97d66046.snapshot09.lab.eng.blr.redhat.com.var-run-gluster-snaps-b4207e6cafca4861880455dc97d66046-brick1-b1 not found
[2014-05-09 17:06:55.486553] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /var/run/gluster/snaps/b4207e6cafca4861880455dc97d66046/brick1/b1 on port 49413
[2014-05-09 17:06:55.504212] I [rpc-clnt.c:973:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-05-09 17:06:55.504295] I [rpc-clnt.c:988:rpc_clnt_connection_init] 0-management: defaulting ping-timeout to 30secs
[2014-05-09 17:07:42.438276] E [rpc-transport.c:481:rpc_transport_unref] (-->/usr/lib64/glusterfs/3.6.0/xlator/mgmt/glusterd.so(glusterd_brick_disconnect+0x38) [0x7f457c2258d8] (-->/usr/lib64/glusterfs/3.6.0/xlator/mgmt/glusterd.so(glusterd_rpc_clnt_unref+0x35) [0x7f457c225795] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_unref+0x63) [0x7f458630c5f3]))) 0-rpc_transport: invalid argument: this
[2014-05-09 17:07:42.441894] I [MSGID: 106005] [glusterd-handler.c:4126:__glusterd_brick_rpc_notify] 0-management: Brick snapshot09.lab.eng.blr.redhat.com:/var/run/gluster/snaps/b4207e6cafca4861880455dc97d66046/brick1/b1 has disconnected from glusterd.
[2014-05-09 17:07:42.442209] E [glusterd-utils.c:1939:glusterd_brick_unlink_socket_file] 0-management: Failed to remove /var/run/6de21d681df183dba2575e8c8a5ecb07.socket error: Permission denied
[2014-05-09 17:07:43.969050] I [glusterd-pmap.c:271:pmap_registry_remove] 0-pmap: removing brick /var/run/gluster/snaps/b4207e6cafca4861880455dc97d66046/brick1/b1 on port 49413
[2014-05-09 17:07:43.986473] W [socket.c:522:__socket_rwv] 0-socket.management: writev on 10.70.44.62:1022 failed (Broken pipe)
[2014-05-09 17:07:43.986573] I [socket.c:2239:socket_event_handler] 0-transport: disconnecting now

Comment 2 Vijaikumar Mallikarjuna 2014-05-13 08:16:28 UTC
Patch http://review.gluster.org/#/c/7581/ fixes this issue.

Comment 3 senaik 2014-05-14 13:28:50 UTC
Version : glusterfs-server-3.6.0.0-1.el6rhs.x86_64
========

Able to reproduce the issue with the same steps as mentioned in 'Steps to Reproduce':

1) create a snapshot of volume 

gluster snapshot create snap1 vol0
snapshot create: success: Snap snap1 created successfully

2) Check df -h 

[root@snapshot01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_snapshot01-lv_root
                       44G  2.0G   40G   5% /
tmpfs                 4.0G     0  4.0G   0% /dev/shm
/dev/vda1             485M   34M  426M   8% /boot
/dev/mapper/VolGroup0-thin_vol0
                      100G  169M  100G   1% /brick0
/dev/mapper/VolGroup0-thin_vol1
                      100G   33M  100G   1% /brick1
/dev/mapper/VolGroup0-thin_vol2
                      100G   33M  100G   1% /brick2
/dev/mapper/VolGroup0-thin_vol3
                      100G   33M  100G   1% /brick3
/dev/mapper/VolGroup1-thin_vol4
                      100G   33M  100G   1% /brick4
/dev/mapper/VolGroup1-thin_vol5
                      100G   33M  100G   1% /brick5
/dev/mapper/VolGroup1-thin_vol6
                      100G   33M  100G   1% /brick6
/dev/mapper/VolGroup1-thin_vol7
                      100G   33M  100G   1% /brick7
/dev/mapper/VolGroup0-3bca2cbe0b9c40fcb1d39155c8e4d36b_0
                      100G  169M  100G   1% /var/run/gluster/snaps/3bca2cbe0b9c40fcb1d39155c8e4d36b/brick1

3) List the snapshot 
[root@snapshot01 ~]# gluster snapshot list
snap1

4) delete snapshot and list snapshots again

 gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully
[root@snapshot01 ~]# gluster snapshot list
No snapshots present

5) df -h shows it is still mounted 

df -h |grep 3bca2cbe0b9c40fcb1d39155c8e4d36b
/dev/mapper/VolGroup0-3bca2cbe0b9c40fcb1d39155c8e4d36b_0
                       44G  2.0G   40G   5% /var/run/gluster/snaps/3bca2cbe0b9c40fcb1d39155c8e4d36b/brick1

[root@snapshot01 ~]# ls /dev/mapper/VolGroup0-3bca2cbe0b9c40fcb1d39155c8e4d36b_0
ls: cannot access /dev/mapper/VolGroup0-3bca2cbe0b9c40fcb1d39155c8e4d36b_0: No such file or directory


Moving the bug back to 'Assigned'

Comment 4 Vijaikumar Mallikarjuna 2014-05-15 08:47:55 UTC
We recently switched to the umount2() system call instead of running the external umount command for unmount operations.

It looks like umount2() does not clean up the /etc/mtab entry after unmounting.

Comment 5 Vijaikumar Mallikarjuna 2014-05-15 09:44:58 UTC
Patch# 7775 posted upstream

Comment 6 Vijaikumar Mallikarjuna 2014-05-30 08:06:44 UTC
Patch https://code.engineering.redhat.com/gerrit/26012 posted

Comment 7 Rahul Hinduja 2014-06-10 14:54:55 UTC
Verified with build: glusterfs-3.6.0.15-1.el6rhs.x86_64

Before snapshot is created:
===========================

[root@inception ~]# df -h 
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_inception-lv_root
                       50G  6.4G   41G  14% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdm1             485M   34M  426M   8% /boot
/dev/mapper/vg_inception-lv_home
                      394G  199M  374G   1% /home
/dev/mapper/VolGroup0-thin_vol0
                      100G   33M  100G   1% /brick0
/dev/mapper/VolGroup0-thin_vol1
                      100G   33M  100G   1% /brick1
/dev/mapper/VolGroup0-thin_vol2
                      100G   33M  100G   1% /brick2
/dev/mapper/VolGroup0-thin_vol3
                      100G   33M  100G   1% /brick3
/dev/mapper/VolGroup1-thin_vol4
                      100G   33M  100G   1% /brick4
/dev/mapper/VolGroup1-thin_vol5
                      100G   33M  100G   1% /brick5
/dev/mapper/VolGroup1-thin_vol6
                      100G   33M  100G   1% /brick6
/dev/mapper/VolGroup1-thin_vol7
                      100G   33M  100G   1% /brick7
[root@inception ~]# 


Snapshot is created:
====================
[root@inception ~]# gluster snapshot create snap1 vol0
snapshot create: success: Snap snap1 created successfully
[root@inception ~]# df -h 
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_inception-lv_root
                       50G  6.4G   41G  14% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdm1             485M   34M  426M   8% /boot
/dev/mapper/vg_inception-lv_home
                      394G  199M  374G   1% /home
/dev/mapper/VolGroup0-thin_vol0
                      100G   33M  100G   1% /brick0
/dev/mapper/VolGroup0-thin_vol1
                      100G   33M  100G   1% /brick1
/dev/mapper/VolGroup0-thin_vol2
                      100G   33M  100G   1% /brick2
/dev/mapper/VolGroup0-thin_vol3
                      100G   33M  100G   1% /brick3
/dev/mapper/VolGroup1-thin_vol4
                      100G   33M  100G   1% /brick4
/dev/mapper/VolGroup1-thin_vol5
                      100G   33M  100G   1% /brick5
/dev/mapper/VolGroup1-thin_vol6
                      100G   33M  100G   1% /brick6
/dev/mapper/VolGroup1-thin_vol7
                      100G   33M  100G   1% /brick7
/dev/mapper/VolGroup0-4c2d89772b0c4d1d99e63112ef0f805e_0
                      100G   33M  100G   1% /var/run/gluster/snaps/4c2d89772b0c4d1d99e63112ef0f805e/brick1
[root@inception ~]# ls /var/run/gluster/snaps/4c2d89772b0c4d1d99e63112ef0f805e/brick1
[root@inception ~]# 


Snapshot is deleted:
=====================

[root@inception ~]# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully
[root@inception ~]# gluster snapshot list
No snapshots present
[root@inception ~]# gluster snapshot info 
No snapshots present
[root@inception ~]#
[root@inception ~]# ls  /var/run/gluster/snaps/4c2d89772b0c4d1d99e63112ef0f805e/*
[root@inception ~]# ls /dev/mapper/VolGroup0-4c2d89772b0c4d1d99e63112ef0f805e_0
ls: cannot access /dev/mapper/VolGroup0-4c2d89772b0c4d1d99e63112ef0f805e_0: No such file or directory
[root@inception ~]# df -h 
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_inception-lv_root
                       50G  6.4G   41G  14% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdm1             485M   34M  426M   8% /boot
/dev/mapper/vg_inception-lv_home
                      394G  199M  374G   1% /home
/dev/mapper/VolGroup0-thin_vol0
                      100G   33M  100G   1% /brick0
/dev/mapper/VolGroup0-thin_vol1
                      100G   33M  100G   1% /brick1
/dev/mapper/VolGroup0-thin_vol2
                      100G   33M  100G   1% /brick2
/dev/mapper/VolGroup0-thin_vol3
                      100G   33M  100G   1% /brick3
/dev/mapper/VolGroup1-thin_vol4
                      100G   33M  100G   1% /brick4
/dev/mapper/VolGroup1-thin_vol5
                      100G   33M  100G   1% /brick5
/dev/mapper/VolGroup1-thin_vol6
                      100G   33M  100G   1% /brick6
/dev/mapper/VolGroup1-thin_vol7
                      100G   33M  100G   1% /brick7
[root@inception ~]#

Moving the bug to verified state

Comment 9 errata-xmlrpc 2014-09-22 19:37:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

