Bug 1341034 - [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation
Summary: [quota+snapshot]: Directories are inaccessible from activated snapshot, when the snapshot was created during directory creation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Mohammed Rafi KC
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1311817 1341796 1342372 1342374 1342375
 
Reported: 2016-05-31 06:55 UTC by krishnaram Karthick
Modified: 2016-09-17 15:29 UTC (History)
CC List: 8 users

Fixed In Version: glusterfs-3.7.9-8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1341796
Environment:
Last Closed: 2016-06-23 05:24:57 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1240 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 Update 3 2016-06-23 08:51:28 UTC

Description krishnaram Karthick 2016-05-31 06:55:34 UTC
Description of problem:
In the snapshot taken during directory creation, the directories that were being created at that time are not accessible.
Snapshots taken later, without any IO in progress, appear to have consistent data.

Volume Name: superman
Type: Tier
Volume ID: ba49611f-1cbc-4a25-a1a8-8a0eecfe6f76
Status: Started
Number of Bricks: 20
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 4 x 2 = 8
Brick1: 10.70.35.133:/bricks/brick7/reg-tier-3
Brick2: 10.70.35.10:/bricks/brick7/reg-tier-3
Brick3: 10.70.35.11:/bricks/brick7/reg-tier-3
Brick4: 10.70.35.225:/bricks/brick7/reg-tier-3
Brick5: 10.70.35.239:/bricks/brick7/reg-tier-3
Brick6: 10.70.37.60:/bricks/brick7/reg-tier-3
Brick7: 10.70.37.120:/bricks/brick7/reg-tier-3
Brick8: 10.70.37.101:/bricks/brick7/reg-tier-3
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick9: 10.70.37.101:/bricks/brick0/l1
Brick10: 10.70.37.120:/bricks/brick0/l1
Brick11: 10.70.37.60:/bricks/brick0/l1
Brick12: 10.70.35.239:/bricks/brick0/l1
Brick13: 10.70.35.225:/bricks/brick0/l1
Brick14: 10.70.35.11:/bricks/brick0/l1
Brick15: 10.70.35.10:/bricks/brick0/l1
Brick16: 10.70.35.133:/bricks/brick0/l1
Brick17: 10.70.37.101:/bricks/brick1/l1
Brick18: 10.70.37.120:/bricks/brick1/l1
Brick19: 10.70.37.60:/bricks/brick1/l1
Brick20: 10.70.35.239:/bricks/brick1/l1
Options Reconfigured:
features.barrier: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
nfs-ganesha: disable

'ls -l' from the mount point of the activated snapshot:

???????????   ? ?    ?           ?            ? dir-1
???????????   ? ?    ?           ?            ? dir-10
???????????   ? ?    ?           ?            ? dir-11
???????????   ? ?    ?           ?            ? dir-12
???????????   ? ?    ?           ?            ? dir-13
???????????   ? ?    ?           ?            ? dir-14
???????????   ? ?    ?           ?            ? dir-15
???????????   ? ?    ?           ?            ? dir-16
???????????   ? ?    ?           ?            ? dir-17

gluster snapshot list
snapshot-superman-1_GMT-2016.05.31-04.54.11
snapshot-superman-2_GMT-2016.05.31-05.02.13
snapshot-superman-3_GMT-2016.05.31-05.08.25
snapshot-superman-4_GMT-2016.05.31-05.24.10

Snapshot 'snapshot-superman-1_GMT-2016.05.31-04.54.11' was taken during directory creation. The rest of the snapshots were taken later, without any IO in progress.

Version-Release number of selected component (if applicable):
glusterfs-3.7.9-6.el7rhgs.x86_64

How reproducible:
1/1, yet to determine

Steps to Reproduce:
1. Create a 2 x (4+2) distributed-disperse volume
2. Start a Linux untar operation and 'mkdir -p dir-{1..1000}/sd-{1..100}' from two different clients
3. Attach a 4 x 2 hot tier
4. Create a snapshot while the directory creation is still in progress
5. Activate the snapshot, mount it, and list the directories (see the CLI sketch below)
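
For reference, a rough CLI sequence for the steps above; host names, brick paths and mount points are placeholders rather than the exact ones from this setup, and the snapshot name gets a GMT timestamp appended automatically, so take the full name from 'gluster snapshot list':

# 1. Create and start a 2 x (4+2) distributed-disperse volume with quota enabled.
gluster volume create superman disperse 6 redundancy 2 \
    host{1..6}:/bricks/brick0/l1 host{1..6}:/bricks/brick1/l1
gluster volume start superman
gluster volume quota superman enable

# 2. Mount the volume on the clients and start the directory creation / untar.
mount -t glusterfs host1:/superman /mnt/superman
mkdir -p /mnt/superman/dir-{1..1000}/sd-{1..100} &

# 3. Attach a 4 x 2 hot tier while the IO is still running.
gluster volume tier superman attach replica 2 host{1..8}:/bricks/brick7/reg-tier-3

# 4. Take a snapshot while the directory creation is still in progress.
gluster snapshot create snapshot-superman-1 superman

# 5. Activate the snapshot, mount it, and list the directories.
gluster snapshot list
gluster snapshot activate snapshot-superman-1_GMT-2016.05.31-04.54.11
mount -t glusterfs host1:/snaps/snapshot-superman-1_GMT-2016.05.31-04.54.11/superman /mnt/snap
ls -l /mnt/snap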

Actual results:
The directories that were being created when the snapshot was taken are inaccessible (shown as '??????????' by 'ls -l').

Expected results:
All directories should be accessible from the activated snapshot.

Additional info:
sosreports shall be attached shortly.

Comment 2 krishnaram Karthick 2016-05-31 08:44:36 UTC
 - Tried reproducing this issue; could not reproduce it.
 - This can happen when the fix-layout on the hot tier is not yet complete and a snapshot is taken at that point; the theory still has to be confirmed (a quick check is sketched below).
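
If this needs to be checked during a reproduction attempt, the tier/rebalance state can be inspected just before the snapshot is taken (standard gluster CLI; the volume name is the one from this report):

# Check whether the tier daemon / fix-layout is still in progress on the volume
# before creating the snapshot.
gluster volume tier superman status
gluster volume rebalance superman status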

Comment 4 krishnaram Karthick 2016-06-01 05:58:14 UTC
snapshot-1 was activated and mounted on 10.70.47.161 at '/mnt/superman'

[root@dhcp47-161 ~]# mount
...
10.70.37.120:/snaps/snapshot-superman-1_GMT-2016.05.31-04.54.11/superman on /mnt/superman type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
...

Comment 9 Mohammed Rafi KC 2016-06-01 19:00:36 UTC
upstream master patch : http://review.gluster.org/14608

Comment 10 Mohammed Rafi KC 2016-06-02 15:01:21 UTC
More details about the fix:

Since snapshot volumes are read-only, there is no need to enforce quota limits on them. So even if quota is enabled on the parent volume, quota is disabled for the snapshot when its volfile is created. Restoring the snapshot volume, however, preserves the quota options.
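
A minimal way to sanity-check this behaviour on a test node, assuming the default glusterd working directory (the exact volfile layout under /var/lib/glusterd/snaps can differ between versions):

# If quota is disabled at volfile-generation time, the snapshot's generated
# volfiles should no longer reference the quota xlator.
find /var/lib/glusterd/snaps/snapshot-superman-1_GMT-2016.05.31-04.54.11 -name '*.vol' \
    -exec grep -l 'features/quota' {} +        # expect no matches

# The parent volume still has quota enabled ...
gluster volume info superman | grep -E 'features\.(quota|inode-quota)'

# ... and restoring the snapshot (parent volume stopped first; the snapshot may
# need to be deactivated as well) should preserve the quota options and limits.
gluster volume stop superman
gluster snapshot restore snapshot-superman-1_GMT-2016.05.31-04.54.11
gluster volume start superman
gluster volume quota superman list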

Comment 11 Mohammed Rafi KC 2016-06-03 06:51:54 UTC
upstream master patch  : http://review.gluster.org/14608
upstream 3.8 patch     : http://review.gluster.org/14628
upstream 3.7 patch     : http://review.gluster.org/14629
upstream 3.6 patch     : http://review.gluster.org/14630

downstream 3.1.3 patch : https://code.engineering.redhat.com/gerrit/5814/

Comment 12 rjoseph 2016-06-03 10:59:01 UTC
Correct downstream patch: https://code.engineering.redhat.com/gerrit/75814

Comment 14 Anil Shah 2016-06-07 09:50:19 UTC
Created a distributed-disperse tiered volume and mounted it via FUSE.
Started creating files and directories from the client.
While the IO was in progress, created snapshots.
Mounted the snapshots and checked that files and directories are accessible and consistent.
Restored a snapshot to check whether the quota values are restored or not.

Bug verified on build  glusterfs-3.7.9-8.el7rhgs.x86_64

Comment 16 errata-xmlrpc 2016-06-23 05:24:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

