Bug 1589842 - [USS] snapview server does not go through the list of all the snapshots for validating a snap
Summary: [USS] snapview server does not go through the list of all the snapshots for validating a snap
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-11 13:49 UTC by Raghavendra Bhat
Modified: 2018-10-23 15:11 UTC

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-23 15:11:13 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra Bhat 2018-06-11 13:49:48 UTC
Description of problem:

Currently the snapview server within the snap daemon makes use of gfapi to communicate with individual snapshots. Each snapshot is represented by a glfs instance. The inode context of each inode that the snap daemon maintains records which glfs instance (i.e., which snapshot) the inode belongs to.

To check whether a glfs instance is valid, i.e., whether it corresponds to one of the snapshots, the snapview server compares it against its list of snapshots using the SVS_CHECK_VALID_SNAPSHOT_HANDLE macro shown below.

But that macro has a bug: it does not go through the whole list of snapshots (or rather, the glfs instances corresponding to those snapshots) and always compares against the first glfs instance only.


#define SVS_CHECK_VALID_SNAPSHOT_HANDLE(fs, this)                       \
        do {                                                            \
                svs_private_t *_private = NULL;                         \
                _private = this->private;                               \
                int  i = 0;                                             \
                gf_boolean_t found = _gf_false;                         \
                LOCK (&_private->snaplist_lock);                        \
                {                                                       \
                        for (i = 0; i < _private->num_snaps; i++) {     \
                                if (_private->dirents->fs && fs &&      \
                                    _private->dirents->fs == fs) {      \
                                        found = _gf_true;               \
                                        break;                          \
                                }                                       \
                        }                                               \
                }                                                       \
                UNLOCK (&_private->snaplist_lock);                      \
                                                                        \
                if (!found)                                             \
                        fs = NULL;                                      \
        } while (0)



It should be:

   if (_private->dirents->fs[i] && fs &&      \
       _private->dirents->fs[i] == fs) {      \

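To make the failure mode concrete, here is a minimal standalone C sketch (hypothetical names, not the actual snapview-server structures or types): the buggy variant mirrors the macro above in that the loop index is never used in the comparison, so only the first handle is ever checked, while the fixed variant indexes each entry the way the suggested change does.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-ins for glfs_t * handles and the per-snapshot list; all names
 * here are hypothetical and only illustrate the lookup pattern. */
typedef struct { int id; } handle_t;

typedef struct {
        handle_t *handles[8];   /* one handle per snapshot */
        size_t    num_snaps;
} snap_list_t;

/* Buggy variant: the index i is never used, so the comparison only
 * ever looks at the first entry -- the same flaw as in the macro. */
static bool contains_buggy (const snap_list_t *list, const handle_t *fs)
{
        for (size_t i = 0; i < list->num_snaps; i++) {
                if (list->handles[0] && fs && list->handles[0] == fs)
                        return true;
        }
        return false;
}

/* Fixed variant: every entry in the list is compared. */
static bool contains_fixed (const snap_list_t *list, const handle_t *fs)
{
        for (size_t i = 0; i < list->num_snaps; i++) {
                if (list->handles[i] && fs && list->handles[i] == fs)
                        return true;
        }
        return false;
}

int main (void)
{
        handle_t a = {1}, b = {2}, c = {3};
        snap_list_t list = { .handles = { &a, &b, &c }, .num_snaps = 3 };

        /* &c is a valid snapshot handle, but the buggy check misses it
         * because it never advances past the first entry. */
        printf ("buggy: %d, fixed: %d\n",
                contains_buggy (&list, &c), contains_fixed (&list, &c));
        return 0;
}
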
Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2018-06-11 14:20:24 UTC
REVIEW: https://review.gluster.org/20196 (features/snapview-server: properly go through the list of snapshots) posted (#2) for review on master by Raghavendra Bhat

Comment 2 Worker Ant 2018-06-14 15:49:31 UTC
COMMIT: https://review.gluster.org/20196 committed in master by "Amar Tumballi" <amarts> with a commit message: features/snapview-server: properly go through the list of snapshots

The comparison code to check whether a glfs instance is valid
(i.e. whether it corresponds to one in the list of current snapshots)
was not correct and was not comparing all the snapshots

Change-Id: I87c58edb47bd9ebbb91d805e45df2c4baf2c8118
fixes: bz#1589842
Signed-off-by: Raghavendra Bhat <raghavendra>

Comment 3 Shyamsundar 2018-10-23 15:11:13 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

