Description of problem:

The snapview server in the snap daemon uses gfapi to communicate with individual snapshots; each snapshot is represented by a glfs instance. The inode context that the snap daemon maintains for each inode records which glfs instance (i.e. which snapshot) the inode belongs to. To check whether a given glfs instance is a valid handle corresponding to one of the current snapshots, snapview server compares it against its list of snapshots. However, the macro that performs this comparison has a bug: it never walks the list of snapshots (i.e. the glfs instances corresponding to those snapshots) and always compares against the first glfs instance only.

        #define SVS_CHECK_VALID_SNAPSHOT_HANDLE(fs, this)                      \
                do {                                                           \
                        svs_private_t *_private = NULL;                        \
                        _private = this->private;                              \
                        int i = 0;                                             \
                        gf_boolean_t found = _gf_false;                        \
                        LOCK (&_private->snaplist_lock);                       \
                        {                                                      \
                                for (i = 0; i < _private->num_snaps; i++) {    \
                                        if (_private->dirents->fs && fs &&     \
                                            _private->dirents->fs == fs) {     \
                                                found = _gf_true;              \
                                                break;                         \
                                        }                                      \
                                }                                              \
                        }                                                      \
                        UNLOCK (&_private->snaplist_lock);                     \
                                                                               \
                        if (!found)                                            \
                                fs = NULL;                                     \
                } while (0)

The loop index i is never applied to the dirents array, so every iteration compares the same (first) entry. The comparison should be against the i-th dirent:

        if (_private->dirents[i].fs && fs &&                                   \
            _private->dirents[i].fs == fs) {                                   \
REVIEW: https://review.gluster.org/20196 (features/snapview-server: properly go through the list of snapshots) posted (#2) for review on master by Raghavendra Bhat
COMMIT: https://review.gluster.org/20196 committed in master by "Amar Tumballi" <amarts> with a commit message:

    features/snapview-server: properly go through the list of snapshots

    The comparison code to check whether a glfs instance is valid
    (i.e. whether it corresponds to one in the list of current snapshots)
    was not correct and was not comparing all the snapshots.

    Change-Id: I87c58edb47bd9ebbb91d805e45df2c4baf2c8118
    fixes: bz#1589842
    Signed-off-by: Raghavendra Bhat <raghavendra>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-5.0, please open a new bug report. glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/