Bug 1119628 - [SNAPSHOT] USS: The .snaps directory does not get refreshed immediately if snaps are taken when I/O is in progress
Summary: [SNAPSHOT] USS: The .snaps directory does not get refreshed immediately...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: unclassified
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact:
URL:
Whiteboard: USS
Depends On: 1113923
Blocks:
 
Reported: 2014-07-15 07:41 UTC by Raghavendra Bhat
Modified: 2015-05-14 17:42 UTC (History)
5 users (show)

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1113923
Environment:
Last Closed: 2015-05-14 17:26:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra Bhat 2014-07-15 07:41:45 UTC
+++ This bug was initially created as a clone of Bug #1113923 +++

Description of problem:
After enabling the USS feature, if snapshots are taken while I/O is in progress, the .snaps directory appears empty for a couple of minutes before its contents become visible.


How reproducible:
100%

Steps to Reproduce:
1. Create a dist-rep volume (2x2) and start it. Enable the USS feature.
2. Mount the volume with FUSE and start I/O.
3. While I/O is in progress, take snapshots.
4. Browse the .snaps contents immediately (the snapshots are not shown).
5. After a few minutes, the .snaps directory shows the snapshots and their contents.
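A rough command-line sketch of the steps above (a hedged reconstruction: host names and brick paths are taken from the volume info further below, while the background I/O loop and the snapshot name are illustrative assumptions):

gluster volume create testvol replica 2 host1:/brick1/testvol host2:/brick1/testvol host3:/brick1/testvol host4:/brick1/testvol
gluster volume start testvol
gluster volume set testvol features.uss enable
mount -t glusterfs host1:/testvol /mnt/test1

# illustrative background I/O
for i in $(seq 1 5000); do dd if=/dev/zero of=/mnt/test1/file.$i bs=64k count=1 2>/dev/null; done &

# take a snapshot while the loop is running, then browse .snaps right away
gluster snapshot create snap1 testvol
ls /mnt/test1/.snaps    # expected: snap1 listed; observed: empty for a few minutes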

Actual results:
The .snaps contents are not visible immediately; they are shown only after some delay.


Expected results:
.snaps should show the contents immediately.

Additional info:

[root@snapshot09 ~]# gluster volume set testvol features.uss enable
volume set: success
[root@snapshot09 ~]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: 065f0c54-47c6-4cb1-be5d-c47e5561764c
Status: Created
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: host1:/brick2/new_vol
Brick2: host2:/brick2/new_vol
Brick3: host3:/brick2/new_vol
Brick4: host4:/brick2/new_vol
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: host1:/brick1/testvol
Brick2: host2:/brick1/testvol
Brick3: host3:/brick1/testvol
Brick4: host4:/brick1/testvol
Options Reconfigured:
features.uss: enable
features.barrier: disable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# ps -ef | grep snapd
root     12439     1  0 23:10 ?        00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/testvol -p /var/lib/glusterd/vols/testvol/run/testvol-snapd.pid -l /var/log/glusterfs/testvol-snapd.log --brick-name snapd-testvol -S /var/run/01a048c227f57cefd087f71a1d63acdd.socket --brick-port 49168 --xlator-option testvol-server.listen-port=49168
root     12517 12067  0 23:11 pts/1    00:00:00 grep snapd
[root@snapshot09 ~]# cd .snaps
-bash: cd: .snaps: No such file or directory
[root@snapshot09 ~]# mount
/dev/mapper/vg_snapshot09-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/VolGroup0-thin_vol0 on /brick0 type xfs (rw)
/dev/mapper/VolGroup0-thin_vol1 on /brick1 type xfs (rw)
/dev/mapper/VolGroup0-thin_vol2 on /brick2 type xfs (rw)
/dev/mapper/VolGroup0-thin_vol3 on /brick3 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol4 on /brick4 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol5 on /brick5 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol6 on /brick6 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol7 on /brick7 type xfs (rw)
host1:/newvol on /mnt/test3 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
host1:/testvol on /mnt/test1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@snapshot09 ~]# cp /root/testfile.sh /mnt/test1
[root@snapshot09 ~]# ls
anaconda-ks.cfg  install.log.syslog                                                                             testfile.sh
cleanup.sh       napview-client.c:163:svc_lookup_cbk] 0-testvol-snapview-client: Lookup on normal graph failed  vgcreate.sh
create.sh        readcheck.pl                                                                                   vgremove.sh
install.log      terfsd.c:1182:cleanup_and_exit] (--> 0-: received signum (15), shutting down
[root@snapshot09 ~]# cd /mnt/test1
[root@snapshot09 test1]# gluster volume info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: 065f0c54-47c6-4cb1-be5d-c47e5561764c
Status: Created
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: host1:/brick2/new_vol
Brick2: host2:/brick2/new_vol
Brick3: host3:/brick2/new_vol
Brick4: host4:/brick2/new_vol
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 723f20ad-99a2-4d85-8942-7cec20944676
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: host1:/brick1/testvol
Brick2: host2:/brick1/testvol
Brick3: host3:/brick1/testvol
Brick4: host4:/brick1/testvol
Options Reconfigured:
features.uss: enable
features.barrier: disable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# gluster snapshot info
Snapshot                  : snap1
Snap UUID                 : 7a989d9d-e90b-445d-bd6f-90430ec6139c
Created                   : 2014-06-26 23:16:21
Snap Volumes:

	Snap Volume Name          : f7e5e0620df94576ac3fa61026261d0d
	Origin Volume name        : testvol
	Snaps taken for testvol      : 2
	Snaps available for testvol  : 254
	Status                    : Started
 
Snapshot                  : snap2
Snap UUID                 : 4578428c-c14c-4ee9-92b6-6491d57b5eb5
Created                   : 2014-06-26 23:17:51
Snap Volumes:

	Snap Volume Name          : 4f758b079d01441da4f71c4afe3ded6b
	Origin Volume name        : testvol
	Snaps taken for testvol      : 2
	Snaps available for testvol  : 254
	Status                    : Started
 
[root@snapshot09 ~]# 
[root@snapshot09 test1]# cd .snaps
[root@snapshot09 .snaps]# ls

After 5-10 minutes, the .snaps directory shows the snapshot contents.

[root@snapshot09 test1]# ^C
[root@snapshot09 test1]# cd .snaps
[root@snapshot09 .snaps]# ls
snap1  snap2
[root@snapshot09 .snaps]# cd snap1
[root@snapshot09 snap1]# ls
file.0     file.1464  file.1930  file.2393  file.302  file.769       testfile.1214  testfile.1682  testfile.2149  testfile.2616  testfile.533
file1      file.1465  file.1931  file.2394  file.303  file.77        testfile.1215  testfile.1683  testfile.215   testfile.2617  testfile.534
file.1     file.1466  file.1932  file.2395  file.304  file.770 

[root@snapshot09 snap2]# 
testfile.1216  testfile.1684  testfile.2150  testfile.2618  testfile.535
file.10    file.1467  file.1933 
file.1852  file.2732  file.3617  file.4503  file.5398  testfile.1398  testfile.2284  testfile.3170  testfile.4057  testfile.4944  testfile.971
file.1853  file.2733  file.3618  file.4504  file.54    testfile.1399  testfile.2285  testfile.3171  testfile.4058  testfile.4945  testfile.972
file.1854  file.2734  file.3619  file.4505  file.540   testfile.14    testfile.2286  testfile.3172  testfile.4059  testfile.4946  testfile.973
file.1855  file.2735  file.362   file.4506  file.5400  testfile.140   testfile.2287  testfile.3173  testfile.406   testfile.4947  testfile.974
file.1856  file.2736

--- Additional comment from Raghavendra Bhat on 2014-06-27 05:02:09 EDT ---

This is because snapd, which is started when the user serviceable snapshots feature is enabled, refreshes its list of snapshots only every 5 minutes. So if a snapshot is taken right after snapd has refreshed its list, one has to wait up to 5 minutes for snapd to refresh it again; until then the newly taken snapshot is not visible in the .snaps directory.

This is a known issue.
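A simple way to observe this refresh window from the client mount (a hedged sketch; the mount point follows the transcript above and "snap3" is an illustrative snapshot name):

gluster snapshot create snap3 testvol
SECONDS=0
until ls /mnt/test1/.snaps | grep -q snap3; do sleep 5; done
echo "snap3 visible after ${SECONDS}s"    # up to ~300s with the 5-minute poll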

--- Additional comment from Anand Subramanian on 2014-06-27 08:00:30 EDT ---

As explained by Raghavendra, this is not a bug; please close it. Thanks. The refresh interval for the tech preview is set to 5 minutes and is not configurable.

--- Additional comment from RHEL Product and Program Management on 2014-06-27 08:14:08 EDT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

Comment 1 Anand Avati 2014-07-15 07:42:40 UTC
REVIEW: http://review.gluster.org/8150 (snapview-server: register a callback with glusterd to get notifications) posted (#5) for review on master by Raghavendra Bhat (raghavendra)

Comment 2 Anand Avati 2014-08-07 07:46:26 UTC
REVIEW: http://review.gluster.org/8150 (snapview-server: register a callback with glusterd to get notifications) posted (#6) for review on master by Raghavendra Bhat (raghavendra)

Comment 3 Anand Avati 2014-08-07 11:34:45 UTC
REVIEW: http://review.gluster.org/8150 (snapview-server: register a callback with glusterd to get notifications) posted (#7) for review on master by Raghavendra Bhat (raghavendra)

Comment 4 Anand Avati 2014-08-20 11:43:31 UTC
REVIEW: http://review.gluster.org/8150 (snapview-server: register a callback with glusterd to get notifications) posted (#8) for review on master by Raghavendra Bhat (raghavendra)

Comment 5 Anand Avati 2014-09-05 11:30:55 UTC
REVIEW: http://review.gluster.org/8150 (snapview-server: register a callback with glusterd to get notifications) posted (#9) for review on master by Raghavendra Bhat (raghavendra)

Comment 6 Anand Avati 2014-09-08 14:14:23 UTC
COMMIT: http://review.gluster.org/8150 committed in master by Vijay Bellur (vbellur) 
------
commit 822cf315a5d0f0d2bc90e9f2d8faa6e5e5701ed4
Author: Raghavendra Bhat <raghavendra>
Date:   Tue Jun 17 00:28:01 2014 +0530

    snapview-server: register a callback with glusterd to get notifications
    
    * As of now snapview-server is polling (sending rpc requests to glusterd) to
      get the latest list of snapshots at some regular time intervals
      (non configurable). Instead of that register a callback with glusterd so that
      glusterd sends notifications to snapd whenever a snapshot is created/deleted
      and snapview-server can configure itself.
    
    Change-Id: I17a274fd2ab487d030678f0077feb2b0f35e5896
    BUG: 1119628
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8150
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

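With the notification-based approach described in the commit above, a newly created snapshot should show up in .snaps without the polling delay. A hedged verification sketch against a build containing the fix (glusterfs-3.7.0 per the closing comment; "snap_new" is an illustrative name):

gluster snapshot create snap_new testvol
ls /mnt/test1/.snaps    # snap_new expected to be listed immediately
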
Comment 7 Niels de Vos 2015-05-14 17:26:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

