Bug 1547903

Summary: Stale entries of snapshots need to be removed from /var/run/gluster/snaps
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vinayak Papnoi <vpapnoi>
Component: snapshot
Assignee: Sunny Kumar <sunkumar>
Status: CLOSED ERRATA
QA Contact: Vinayak Papnoi <vpapnoi>
Severity: low
Priority: medium
Version: rhgs-3.4
CC: amukherj, bkunal, nravinas, rhs-bugs, sheggodu, storage-qa-internal, sunkumar, vdas
Target Milestone: ---
Target Release: RHGS 3.4.0
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.12.2-14
Doc Type: Bug Fix
Clones: 1597662
Last Closed: 2018-09-04 06:42:45 UTC
Type: Bug
Bug Depends On: 1597662    
Bug Blocks: 1503137    

Description Vinayak Papnoi 2018-02-22 09:00:44 UTC
Description of problem:
=-=-=-=-=-=-=-=-=-=-=-=

When a snapshot is created, an entry for it is created under /var/run/gluster/snaps/, but when the snapshot is deleted, that entry is not removed.


[root@dhcp42-222 ~]# gluster v info
 
Volume Name: disperse
Type: Distributed-Disperse
Volume ID: ae9c0e11-bb59-45ce-a4ac-4030ea54c259
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.222:/bricks/brick0/disperse-b1
Brick2: 10.70.42.193:/bricks/brick0/disperse-b2
Brick3: 10.70.42.207:/bricks/brick0/disperse-b3
Brick4: 10.70.42.32:/bricks/brick0/disperse-b4
Brick5: 10.70.42.178:/bricks/brick0/disperse-b5
Brick6: 10.70.42.141:/bricks/brick0/disperse-b6
Brick7: 10.70.42.222:/bricks/brick1/disperse-b7
Brick8: 10.70.42.193:/bricks/brick1/disperse-b8
Brick9: 10.70.42.207:/bricks/brick1/disperse-b9
Brick10: 10.70.42.32:/bricks/brick1/disperse-b10
Brick11: 10.70.42.178:/bricks/brick1/disperse-b11
Brick12: 10.70.42.141:/bricks/brick1/disperse-b12
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
disperse.eager-lock: off
disperse.optimistic-change-log: off
disperse.parallel-writes: off
disperse.shd-max-threads: 64
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
features.uss: enable
features.barrier: disable
cluster.brick-multiplex: enable
[root@dhcp42-222 ~]# gluster v status
Status of volume: disperse
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.222:/bricks/brick0/disperse-
b1                                          49152     0          Y       30293
Brick 10.70.42.193:/bricks/brick0/disperse-
b2                                          49152     0          Y       28575
Brick 10.70.42.207:/bricks/brick0/disperse-
b3                                          49152     0          Y       16336
Brick 10.70.42.32:/bricks/brick0/disperse-b
4                                           49152     0          Y       21967
Brick 10.70.42.178:/bricks/brick0/disperse-
b5                                          49152     0          Y       25767
Brick 10.70.42.141:/bricks/brick0/disperse-
b6                                          49152     0          Y       6653 
Brick 10.70.42.222:/bricks/brick1/disperse-
b7                                          49152     0          Y       30293
Brick 10.70.42.193:/bricks/brick1/disperse-
b8                                          49152     0          Y       28575
Brick 10.70.42.207:/bricks/brick1/disperse-
b9                                          49152     0          Y       16336
Brick 10.70.42.32:/bricks/brick1/disperse-b
10                                          49152     0          Y       21967
Brick 10.70.42.178:/bricks/brick1/disperse-
b11                                         49152     0          Y       25767
Brick 10.70.42.141:/bricks/brick1/disperse-
b12                                         49152     0          Y       6653 
Snapshot Daemon on localhost                49153     0          Y       11365
Self-heal Daemon on localhost               N/A       N/A        Y       14086
Quota Daemon on localhost                   N/A       N/A        Y       14095
Snapshot Daemon on 10.70.42.193             49153     0          Y       6266 
Self-heal Daemon on 10.70.42.193            N/A       N/A        Y       28566
Quota Daemon on 10.70.42.193                N/A       N/A        Y       6198 
Snapshot Daemon on 10.70.42.178             49153     0          Y       4445 
Self-heal Daemon on 10.70.42.178            N/A       N/A        Y       25758
Quota Daemon on 10.70.42.178                N/A       N/A        Y       4374 
Snapshot Daemon on 10.70.42.32              49153     0          Y       490  
Self-heal Daemon on 10.70.42.32             N/A       N/A        Y       21958
Quota Daemon on 10.70.42.32                 N/A       N/A        Y       413  
Snapshot Daemon on 10.70.42.141             49153     0          Y       17489
Self-heal Daemon on 10.70.42.141            N/A       N/A        Y       6644 
Quota Daemon on 10.70.42.141                N/A       N/A        Y       17403
Snapshot Daemon on 10.70.42.207             49153     0          Y       27743
Self-heal Daemon on 10.70.42.207            N/A       N/A        Y       16327
Quota Daemon on 10.70.42.207                N/A       N/A        Y       27675
 
Task Status of Volume disperse
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp42-222 ~]# gluster snap list
No snapshots present
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     12      12     221
[root@dhcp42-222 ~]# gluster snap create s1 disperse
snapshot create: success: Snap s1_GMT-2018.02.22-08.39.52 created successfully
[root@dhcp42-222 ~]# gluster snap list
s1_GMT-2018.02.22-08.39.52
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     13      13     248
[root@dhcp42-222 ~]# cd /var/run/gluster/snaps/s
s1_GMT-2018.02.22-08.39.52/    snap-disperse-1/               snap-disperse-3/               snap-disperse-6/               snap-disperse-9/               
snap1_GMT-2018.02.21-03.43.33/ snap-disperse-10/              snap-disperse-4/               snap-disperse-7/               
snap1_GMT-2018.02.21-03.45.11/ snap-disperse-2/               snap-disperse-5/               snap-disperse-8/               
[root@dhcp42-222 ~]# cd /var/run/gluster/snaps/s1_GMT-2018.02.22-08.39.52/
[root@dhcp42-222 s1_GMT-2018.02.22-08.39.52]# ls
638748025a6a433dbfee5e78343461e7
[root@dhcp42-222 s1_GMT-2018.02.22-08.39.52]# cd
[root@dhcp42-222 ~]# gluster snap delete s1_GMT-2018.02.22-08.39.52
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: s1_GMT-2018.02.22-08.39.52: snap removed successfully
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     13      13     248
[root@dhcp42-222 ~]# gluster snap list
No snapshots present
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     13      13     248
[root@dhcp42-222 ~]# 


As seen above, after the snapshot is deleted, its entry is not removed from /var/run/gluster/snaps/.
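
For reference, a minimal shell sketch of this kind of check, which flags directories under /var/run/gluster/snaps/ that no longer correspond to any snapshot reported by 'gluster snap list' (the comparison logic is illustrative only, not part of the product):

#!/bin/bash
# Flag directories under /var/run/gluster/snaps/ whose names do not match
# any snapshot currently known to gluster (illustrative check only).
snapdir=/var/run/gluster/snaps
existing=$(gluster snap list)        # one snapshot name per line, or "No snapshots present"
for entry in "$snapdir"/*/; do
    name=$(basename "$entry")
    if ! printf '%s\n' "$existing" | grep -qx "$name"; then
        echo "stale entry: $snapdir/$name"
    fi
done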



Version-Release number of selected component (if applicable):
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

glusterfs-3.12.2-4.el7rhgs.x86_64


How reproducible:
=-=-=-=-=-=-=-=-=

Always


Steps to Reproduce:
=-=-=-=-=-=-=-=-=-=

1. Create a snapshot
2. Verify /var/run/gluster/snaps/ for the newly created snapshot entry
3. Delete the snapshot
4. Verify /var/run/gluster/snaps/ for the deletion of the newly created snapshot entry
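
These steps can also be scripted; the following is a rough sketch against the volume used above (the volume name "disperse" comes from the setup above, the snapshot name "s1" is illustrative, and the generated snapshot name is picked up from 'gluster snap list'):

#!/bin/bash
vol=disperse
snapdir=/var/run/gluster/snaps

gluster snap create s1 "$vol"                  # step 1: creates s1_GMT-<timestamp>
snap=$(gluster snap list | grep '^s1_')        # pick up the generated snapshot name
ls "$snapdir" | grep "$snap"                   # step 2: entry should be listed
echo y | gluster snap delete "$snap"           # step 3: answer the confirmation prompt
ls "$snapdir" | grep "$snap" \
    && echo "BUG: stale entry for $snap still present"   # step 4: should print nothing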


Actual results:
=-=-=-=-=-=-=-=

Entries are not deleted from /var/run/gluster/snaps/ after snapshot delete.


Expected results:
=-=-=-=-=-=-=-=-=

The entry for a deleted snapshot must not be present under /var/run/gluster/snaps/.
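
Conceptually, the expected cleanup on snapshot delete amounts to something like the sketch below: unmount anything still mounted under the snapshot's directory and remove the directory itself. This is only an illustration of the expected end state, not the actual glusterd fix.

#!/bin/bash
# Illustrative cleanup of a stale snapshot entry (assumes the snapshot has
# already been deleted and its name is passed as the first argument).
snap="$1"
snapdir=/var/run/gluster/snaps/$snap

# Unmount any mount points still active directly under the entry.
for d in "$snapdir"/*/; do
    mountpoint -q "$d" && umount "$d"
done
rm -rf "$snapdir"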

Comment 15 errata-xmlrpc 2018-09-04 06:42:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607