Bug 1332133 - glusterd + bitrot : unable to create clone of snapshot. error "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:
Summary: glusterd + bitrot : unable to create clone of snapshot. error "xlator.c:148:x...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: bitrot
Version: rhgs-3.1
Hardware: All
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Kotresh HR
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1332465 1332776 1332864 1351522
 
Reported: 2016-05-02 10:39 UTC by Anil Shah
Modified: 2017-03-23 05:29 UTC
CC: 6 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1332465 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:29:07 UTC


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Anil Shah 2016-05-02 10:39:07 UTC
Description of problem:

After enabling bitrot on a volume, a clone of a snapshot cannot be created.
The following error is logged: "xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file"

Version-Release number of selected component (if applicable):

glusterfs-3.7.9-3.el7rhgs.x86_64


How reproducible:

100%

Steps to Reproduce:
1. Upgrade the system to glusterfs-3.7.9-3
2. Create a 2x2 distributed-replicate volume
3. Enable quota, set limit-usage, and enable bitrot
4. Create a snapshot and activate it
5. Create a clone of the snapshot
 
Actual results:

Snapshot clone failed.

Expected results:

Snapshot clone should not fail.

Additional info:

Error log for glusterd
==================================

E [MSGID: 106122] [glusterd-mgmt.c:2344:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Post Validation Failed
The message "W [MSGID: 101095] [xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file:
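
For context, the warning is glusterd reporting a failed dlopen(3): xlator_volopt_dynload loads an xlator's option table from a shared object whose path embeds a version string, and here the file at that path is not present on disk. The failure mode can be illustrated outside gluster; a minimal sketch using Python's ctypes (which wraps dlopen), with the path taken from the log above and assumed absent on the machine running it:

```python
import ctypes

# Path from the warning above; assumed not to exist on this machine.
path = "/usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so"

try:
    ctypes.CDLL(path)  # ctypes.CDLL wraps dlopen(3)
except OSError as err:
    # err carries the dlerror() text, e.g. ".../bitrot.so: cannot open
    # shared object file: No such file or directory" -- the same message
    # glusterd writes to its log.
    print("dlopen failed:", err)
```

The point is that the message says nothing about why the path was wrong, only that nothing could be loaded from it.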

Comment 2 Anil Shah 2016-05-03 09:10:59 UTC
Removing the blocker flag, since the clone can be created on a fresh setup.

Comment 3 Kotresh HR 2016-05-03 09:38:14 UTC
Upstream Patch (master):
http://review.gluster.org/#/c/14183/1

Comment 6 Kotresh HR 2016-08-16 06:57:45 UTC
Upstream Patches:

http://review.gluster.org/#/c/14183/  (master)
http://review.gluster.org/#/c/14193/  (3.7)

Available in 3.8 as part of branch out.

Comment 7 Atin Mukherjee 2016-09-17 14:50:10 UTC
Available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4

Comment 10 Sweta Anandpara 2016-11-07 09:28:32 UTC
Tested and verified this on the build glusterfs-3.8.4-3.el7rhgs.x86_64

Created snapshots, activated them, and created clones of the same on different volume types (replica 2, replica 3, disperse), all of which had bitrot enabled. Clones were created successfully, and no error messages were seen in the glusterd logs.

Kotresh, please comment if there is any specific scenario that has to be simulated and tested. If not, we can move this BZ to verified.

[root@dhcp46-239 ~]# rpm -qa  | grep glusterfs
glusterfs-api-3.8.4-3.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-3.el7rhgs.x86_64
glusterfs-libs-3.8.4-3.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-3.8.4-3.el7rhgs.x86_64
glusterfs-cli-3.8.4-3.el7rhgs.x86_64
glusterfs-events-3.8.4-3.el7rhgs.x86_64
glusterfs-rdma-3.8.4-3.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-3.el7rhgs.x86_64
glusterfs-server-3.8.4-3.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-3.el7rhgs.x86_64
glusterfs-devel-3.8.4-3.el7rhgs.x86_64
glusterfs-fuse-3.8.4-3.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-3.el7rhgs.x86_64
[root@dhcp46-239 ~]# gluster v list
clone1
nash
nash_snap_clone
nash_snap_clone2
ozone
repthree
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster snapshot create ozone_snap ozone
snapshot create: success: Snap ozone_snap_GMT-2016.11.07-09.21.37 created successfully
[root@dhcp46-239 ~]# gluster snapshot activate 
Usage: snapshot activate <snapname> [force]
[root@dhcp46-239 ~]# gluster snapshot activate ozone_snap_GMT-2016.11.07-09.21.37
Snapshot activate: ozone_snap_GMT-2016.11.07-09.21.37: Snap activated successfully
[root@dhcp46-239 ~]# gluster snapshot clone ozone_snapclone ozone_snap_GMT-2016.11.07-09.21.37
snapshot clone: success: Clone ozone_snapclone created successfully
[root@dhcp46-239 ~]# gluster snapshot create repthree_snap repthree
snapshot create: success: Snap repthree_snap_GMT-2016.11.07-09.23.14 created successfully
[root@dhcp46-239 ~]# gluster snapshot activate repthree_snap_GMT-2016.11.07-09.23.14
Snapshot activate: repthree_snap_GMT-2016.11.07-09.23.14: Snap activated successfully
[root@dhcp46-239 ~]# gluster snapshot clone repthree_snapclone repthree_snap_GMT-2016.11.07-09.23.14:
snapshot clone: failed: Failed to find :repthree_snap_GMT-2016.11.07-09.23.14: snap
Snapshot command failed
[root@dhcp46-239 ~]# gluster snapshot clone repthree_snapclone repthree_snap_GMT-2016.11.07-09.23.14
snapshot clone: success: Clone repthree_snapclone created successfully
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.242
Uuid: 1e8967ae-51b2-4c27-907e-a22a83107fd0
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster v info
 
Volume Name: clone1
Type: Distributed-Replicate
Volume ID: 0869197f-a6ad-484b-acf3-c698cb1148e5
Status: Created
Snapshot Count: 0
Number of Bricks: 14 x 2 = 28
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/run/gluster/snaps/clone1/brick1/nash0
Brick2: 10.70.46.240:/run/gluster/snaps/clone1/brick2/nash1
Brick3: 10.70.46.242:/run/gluster/snaps/clone1/brick3/nash2
Brick4: 10.70.46.218:/run/gluster/snaps/clone1/brick4/nash3
Brick5: 10.70.46.239:/run/gluster/snaps/clone1/brick5/nash4
Brick6: 10.70.46.240:/run/gluster/snaps/clone1/brick6/nash5
Brick7: 10.70.46.242:/run/gluster/snaps/clone1/brick7/nash6
Brick8: 10.70.46.218:/run/gluster/snaps/clone1/brick8/nash7
Brick9: 10.70.46.239:/run/gluster/snaps/clone1/brick9/nash8
Brick10: 10.70.46.240:/run/gluster/snaps/clone1/brick10/nash9
Brick11: 10.70.46.242:/run/gluster/snaps/clone1/brick11/nash10
Brick12: 10.70.46.218:/run/gluster/snaps/clone1/brick12/nash11
Brick13: 10.70.46.239:/run/gluster/snaps/clone1/brick13/nash12
Brick14: 10.70.46.240:/run/gluster/snaps/clone1/brick14/nash13
Brick15: 10.70.46.242:/run/gluster/snaps/clone1/brick15/nash14
Brick16: 10.70.46.218:/run/gluster/snaps/clone1/brick16/nash15
Brick17: 10.70.46.239:/run/gluster/snaps/clone1/brick17/nash16
Brick18: 10.70.46.240:/run/gluster/snaps/clone1/brick18/nash17
Brick19: 10.70.46.242:/run/gluster/snaps/clone1/brick19/nash18
Brick20: 10.70.46.218:/run/gluster/snaps/clone1/brick20/nash19
Brick21: 10.70.46.239:/run/gluster/snaps/clone1/brick21/nash20
Brick22: 10.70.46.240:/run/gluster/snaps/clone1/brick22/nash21
Brick23: 10.70.46.242:/run/gluster/snaps/clone1/brick23/nash22
Brick24: 10.70.46.218:/run/gluster/snaps/clone1/brick24/nash23
Brick25: 10.70.46.239:/run/gluster/snaps/clone1/brick25/nash24
Brick26: 10.70.46.240:/run/gluster/snaps/clone1/brick26/nash25
Brick27: 10.70.46.242:/run/gluster/snaps/clone1/brick27/nash26
Brick28: 10.70.46.218:/run/gluster/snaps/clone1/brick28/nash27
Options Reconfigured:
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: disable
 
Volume Name: nash
Type: Distributed-Replicate
Volume ID: f0af0a7f-eb15-4da3-bafb-7d1c859abf06
Status: Started
Snapshot Count: 2
Number of Bricks: 14 x 2 = 28
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/bricks/brick0/nash0
Brick2: 10.70.46.240:/bricks/brick0/nash1
Brick3: 10.70.46.242:/bricks/brick0/nash2
Brick4: 10.70.46.218:/bricks/brick0/nash3
Brick5: 10.70.46.239:/bricks/brick1/nash4
Brick6: 10.70.46.240:/bricks/brick1/nash5
Brick7: 10.70.46.242:/bricks/brick1/nash6
Brick8: 10.70.46.218:/bricks/brick1/nash7
Brick9: 10.70.46.239:/bricks/brick2/nash8
Brick10: 10.70.46.240:/bricks/brick2/nash9
Brick11: 10.70.46.242:/bricks/brick2/nash10
Brick12: 10.70.46.218:/bricks/brick2/nash11
Brick13: 10.70.46.239:/bricks/brick3/nash12
Brick14: 10.70.46.240:/bricks/brick3/nash13
Brick15: 10.70.46.242:/bricks/brick3/nash14
Brick16: 10.70.46.218:/bricks/brick3/nash15
Brick17: 10.70.46.239:/bricks/brick4/nash16
Brick18: 10.70.46.240:/bricks/brick4/nash17
Brick19: 10.70.46.242:/bricks/brick4/nash18
Brick20: 10.70.46.218:/bricks/brick4/nash19
Brick21: 10.70.46.239:/bricks/brick5/nash20
Brick22: 10.70.46.240:/bricks/brick5/nash21
Brick23: 10.70.46.242:/bricks/brick5/nash22
Brick24: 10.70.46.218:/bricks/brick5/nash23
Brick25: 10.70.46.239:/bricks/brick6/nash24
Brick26: 10.70.46.240:/bricks/brick6/nash25
Brick27: 10.70.46.242:/bricks/brick6/nash26
Brick28: 10.70.46.218:/bricks/brick6/nash27
Options Reconfigured:
features.barrier: disable
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable
 
Volume Name: nash_snap_clone
Type: Distributed-Replicate
Volume ID: b02a1fd5-970d-4437-afb4-d01b8cf4f7ec
Status: Created
Snapshot Count: 0
Number of Bricks: 14 x 2 = 28
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/run/gluster/snaps/nash_snap_clone/brick1/nash0
Brick2: 10.70.46.240:/run/gluster/snaps/nash_snap_clone/brick2/nash1
Brick3: 10.70.46.242:/run/gluster/snaps/nash_snap_clone/brick3/nash2
Brick4: 10.70.46.218:/run/gluster/snaps/nash_snap_clone/brick4/nash3
Brick5: 10.70.46.239:/run/gluster/snaps/nash_snap_clone/brick5/nash4
Brick6: 10.70.46.240:/run/gluster/snaps/nash_snap_clone/brick6/nash5
Brick7: 10.70.46.242:/run/gluster/snaps/nash_snap_clone/brick7/nash6
Brick8: 10.70.46.218:/run/gluster/snaps/nash_snap_clone/brick8/nash7
Brick9: 10.70.46.239:/run/gluster/snaps/nash_snap_clone/brick9/nash8
Brick10: 10.70.46.240:/run/gluster/snaps/nash_snap_clone/brick10/nash9
Brick11: 10.70.46.242:/run/gluster/snaps/nash_snap_clone/brick11/nash10
Brick12: 10.70.46.218:/run/gluster/snaps/nash_snap_clone/brick12/nash11
Brick13: 10.70.46.239:/run/gluster/snaps/nash_snap_clone/brick13/nash12
Brick14: 10.70.46.240:/run/gluster/snaps/nash_snap_clone/brick14/nash13
Brick15: 10.70.46.242:/run/gluster/snaps/nash_snap_clone/brick15/nash14
Brick16: 10.70.46.218:/run/gluster/snaps/nash_snap_clone/brick16/nash15
Brick17: 10.70.46.239:/run/gluster/snaps/nash_snap_clone/brick17/nash16
Brick18: 10.70.46.240:/run/gluster/snaps/nash_snap_clone/brick18/nash17
Brick19: 10.70.46.242:/run/gluster/snaps/nash_snap_clone/brick19/nash18
Brick20: 10.70.46.218:/run/gluster/snaps/nash_snap_clone/brick20/nash19
Brick21: 10.70.46.239:/run/gluster/snaps/nash_snap_clone/brick21/nash20
Brick22: 10.70.46.240:/run/gluster/snaps/nash_snap_clone/brick22/nash21
Brick23: 10.70.46.242:/run/gluster/snaps/nash_snap_clone/brick23/nash22
Brick24: 10.70.46.218:/run/gluster/snaps/nash_snap_clone/brick24/nash23
Brick25: 10.70.46.239:/run/gluster/snaps/nash_snap_clone/brick25/nash24
Brick26: 10.70.46.240:/run/gluster/snaps/nash_snap_clone/brick26/nash25
Brick27: 10.70.46.242:/run/gluster/snaps/nash_snap_clone/brick27/nash26
Brick28: 10.70.46.218:/run/gluster/snaps/nash_snap_clone/brick28/nash27
Options Reconfigured:
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: disable
 
Volume Name: nash_snap_clone2
Type: Distributed-Replicate
Volume ID: 880187c0-3378-4b24-aa7e-1045d5005722
Status: Created
Snapshot Count: 0
Number of Bricks: 14 x 2 = 28
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/run/gluster/snaps/nash_snap_clone2/brick1/nash0
Brick2: 10.70.46.240:/run/gluster/snaps/nash_snap_clone2/brick2/nash1
Brick3: 10.70.46.242:/run/gluster/snaps/nash_snap_clone2/brick3/nash2
Brick4: 10.70.46.218:/run/gluster/snaps/nash_snap_clone2/brick4/nash3
Brick5: 10.70.46.239:/run/gluster/snaps/nash_snap_clone2/brick5/nash4
Brick6: 10.70.46.240:/run/gluster/snaps/nash_snap_clone2/brick6/nash5
Brick7: 10.70.46.242:/run/gluster/snaps/nash_snap_clone2/brick7/nash6
Brick8: 10.70.46.218:/run/gluster/snaps/nash_snap_clone2/brick8/nash7
Brick9: 10.70.46.239:/run/gluster/snaps/nash_snap_clone2/brick9/nash8
Brick10: 10.70.46.240:/run/gluster/snaps/nash_snap_clone2/brick10/nash9
Brick11: 10.70.46.242:/run/gluster/snaps/nash_snap_clone2/brick11/nash10
Brick12: 10.70.46.218:/run/gluster/snaps/nash_snap_clone2/brick12/nash11
Brick13: 10.70.46.239:/run/gluster/snaps/nash_snap_clone2/brick13/nash12
Brick14: 10.70.46.240:/run/gluster/snaps/nash_snap_clone2/brick14/nash13
Brick15: 10.70.46.242:/run/gluster/snaps/nash_snap_clone2/brick15/nash14
Brick16: 10.70.46.218:/run/gluster/snaps/nash_snap_clone2/brick16/nash15
Brick17: 10.70.46.239:/run/gluster/snaps/nash_snap_clone2/brick17/nash16
Brick18: 10.70.46.240:/run/gluster/snaps/nash_snap_clone2/brick18/nash17
Brick19: 10.70.46.242:/run/gluster/snaps/nash_snap_clone2/brick19/nash18
Brick20: 10.70.46.218:/run/gluster/snaps/nash_snap_clone2/brick20/nash19
Brick21: 10.70.46.239:/run/gluster/snaps/nash_snap_clone2/brick21/nash20
Brick22: 10.70.46.240:/run/gluster/snaps/nash_snap_clone2/brick22/nash21
Brick23: 10.70.46.242:/run/gluster/snaps/nash_snap_clone2/brick23/nash22
Brick24: 10.70.46.218:/run/gluster/snaps/nash_snap_clone2/brick24/nash23
Brick25: 10.70.46.239:/run/gluster/snaps/nash_snap_clone2/brick25/nash24
Brick26: 10.70.46.240:/run/gluster/snaps/nash_snap_clone2/brick26/nash25
Brick27: 10.70.46.242:/run/gluster/snaps/nash_snap_clone2/brick27/nash26
Brick28: 10.70.46.218:/run/gluster/snaps/nash_snap_clone2/brick28/nash27
Options Reconfigured:
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: disable
 
Volume Name: ozone
Type: Disperse
Volume ID: 43e6a2ed-4d6f-4ada-8c37-1474970c3dd9
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.240:/bricks/brick0/ozone0
Brick2: 10.70.46.242:/bricks/brick0/ozone1
Brick3: 10.70.46.218:/bricks/brick0/ozone2
Brick4: 10.70.46.240:/bricks/brick1/ozone3
Brick5: 10.70.46.242:/bricks/brick1/ozone4
Brick6: 10.70.46.218:/bricks/brick1/ozone5
Options Reconfigured:
features.barrier: disable
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable
 
Volume Name: ozone_snapclone
Type: Disperse
Volume ID: 555551af-8b70-4530-81a9-e839a4d277df
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.240:/run/gluster/snaps/ozone_snapclone/brick1/ozone0
Brick2: 10.70.46.242:/run/gluster/snaps/ozone_snapclone/brick2/ozone1
Brick3: 10.70.46.218:/run/gluster/snaps/ozone_snapclone/brick3/ozone2
Brick4: 10.70.46.240:/run/gluster/snaps/ozone_snapclone/brick4/ozone3
Brick5: 10.70.46.242:/run/gluster/snaps/ozone_snapclone/brick5/ozone4
Brick6: 10.70.46.218:/run/gluster/snaps/ozone_snapclone/brick6/ozone5
Options Reconfigured:
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: disable
 
Volume Name: repthree
Type: Replicate
Volume ID: aa8f3095-5a69-4d0a-80d9-6182c3de3cb4
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/bricks/brick0/repthree1
Brick2: 10.70.46.240:/bricks/brick0/repthree2
Brick3: 10.70.46.242:/bricks/brick0/repthree3
Options Reconfigured:
features.barrier: disable
diagnostics.client-log-level: INFO
performance.readdir-ahead: on
transport.address-family: inet
features.bitrot: on
features.scrub: Active
features.scrub-freq: weekly
performance.stat-prefetch: off
cluster.enable-shared-storage: disable
 
Volume Name: repthree_snapclone
Type: Replicate
Volume ID: 649fa67e-a83a-4b11-8c46-1adc117ef92b
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/run/gluster/snaps/repthree_snapclone/brick1/repthree1
Brick2: 10.70.46.240:/run/gluster/snaps/repthree_snapclone/brick2/repthree2
Brick3: 10.70.46.242:/run/gluster/snaps/repthree_snapclone/brick3/repthree3
Options Reconfigured:
diagnostics.client-log-level: INFO
performance.readdir-ahead: on
transport.address-family: inet
features.bitrot: on
features.scrub: Active
features.scrub-freq: weekly
performance.stat-prefetch: off
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: disable
[root@dhcp46-239 ~]#

Comment 11 Sweta Anandpara 2016-11-09 12:32:37 UTC
Got a consistent reproducer for this issue, thanks to Kotresh.

Steps to reproduce:
-------------------
1. Have a distributed-replicate volume, with or without bitrot enabled.
2. Monitor the glusterd logs under /var/log/glusterfs/, say on node N1.
3. From node N1, execute the command 'gluster volume set <volname> features.expiry-time 20'
// features.expiry-time is one of the bitrot-related options; it controls the time after which the signing of a file takes place.

As soon as step 3 is executed, the glusterd log shows a warning message:
[2016-11-09 12:14:45.387503] W [MSGID: 101095] [xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file: No such file or directory

Reproduced the issue on a 3.1.3 setup and followed the same steps on the build 3.8.4-3. The error message appeared in the glusterd logs on 3.1.3 but not on 3.8.4-3.
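
A quick way to check for this class of problem on any node is to verify that the xlator tree matching each installed glusterfs version actually contains the shared object glusterd tries to load. A minimal sketch (the directory layout and the file name follow the log messages in this bug; the helper name is illustrative, not part of gluster):

```python
import glob
import os

def check_xlator(base="/usr/lib64/glusterfs", name="features/bitrot.so"):
    """Return {version: present?} for every glusterfs version tree under base."""
    result = {}
    for verdir in sorted(glob.glob(os.path.join(base, "*"))):
        so = os.path.join(verdir, "xlator", name)
        result[os.path.basename(verdir)] = os.path.exists(so)
    return result

if __name__ == "__main__":
    for version, present in check_xlator().items():
        print(version, "OK" if present else "MISSING")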

Moving the BZ to verified. Detailed logs are pasted below.

3.1.3 setup
==================

[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# gluster v get testvol all | grep expiry
features.expiry-time                    10                                      
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# gluster v set testvol features.expiry-time 20
volume set: success
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# rpm -qa | grep gluster
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
vdsm-gluster-4.17.33-1.el7rhgs.noarch
glusterfs-client-xlators-3.7.9-12.el7rhgs.x86_64
glusterfs-libs-3.7.9-12.el7rhgs.x86_64
glusterfs-api-3.7.9-12.el7rhgs.x86_64
glusterfs-cli-3.7.9-12.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-12.el7rhgs.x86_64
glusterfs-fuse-3.7.9-12.el7rhgs.x86_64
glusterfs-3.7.9-12.el7rhgs.x86_64
glusterfs-server-3.7.9-12.el7rhgs.x86_64
python-gluster-3.7.9-12.el7rhgs.noarch
[root@dhcp47-60 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.47.61
Uuid: f4b259db-7add-4d01-bb5e-3c7f9c077bb4
State: Peer in Cluster (Connected)

Hostname: 10.70.47.26
Uuid: 95c24075-02aa-49c1-a1e4-c7e0775e7128
State: Peer in Cluster (Connected)

Hostname: 10.70.47.27
Uuid: 8d1aaf3a-059e-41c2-871b-6c7f5c0dd90b
State: Peer in Cluster (Connected)
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: c60c4615-1b96-45ac-ae44-a858b70c5592
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.60:/bricks/brick0/testvol_brick0
Brick2: 10.70.47.61:/bricks/brick0/testvol_brick1
Brick3: 10.70.47.26:/bricks/brick0/testvol_brick2
Brick4: 10.70.47.27:/bricks/brick0/testvol_brick3
Options Reconfigured:
features.expiry-time: 20
performance.readdir-ahead: on
features.bitrot: off
features.scrub: Inactive
features.scrub-throttle: aggressive
features.scrub-freq: hourly
[root@dhcp47-60 ~]# 




[2016-11-09 12:14:45.387503] W [MSGID: 101095] [xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file: No such file or directory
[2016-11-09 12:14:45.585636] W [dict.c:1452:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.7.9/xlator/mgmt/glusterd.so(build_shd_graph+0x69) [0x7f35605b7829] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7f356b9fe1c2] -->/lib64/libglusterfs.so.0(+0x1ed86) [0x7f356b9fbd86] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument]
[2016-11-09 12:14:45.587476] W [dict.c:1452:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.7.9/xlator/mgmt/glusterd.so(build_shd_graph+0x69) [0x7f35605b7829] -->/lib64/libglusterfs.so.0(dict_get_str_boolean+0x32) [0x7f356b9fe1c2] -->/lib64/libglusterfs.so.0(+0x1ed86) [0x7f356b9fbd86] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument]
The message "W [MSGID: 101095] [xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file: No such file or directory" repeated 4 times between [2016-11-09 12:14:45.387503] and [2016-11-09 12:14:45.387857]
[2016-11-09 12:14:45.593883] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2016-11-09 12:14:45.594652] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped



3.8.4-3 setup
=============================

[root@dhcp35-115 ~]# gluster v info
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 630022dd-1f6c-423e-bad6-22fb16f9fbcf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.35.115:/bricks/brick1/ozone
Brick2: 10.70.35.100:/bricks/brick1/ozone
Brick3: 10.70.35.101:/bricks/brick1/ozone (arbiter)
Brick4: 10.70.35.115:/bricks/brick2/ozone4
Brick5: 10.70.35.100:/bricks/brick2/ozone5
Brick6: 10.70.35.101:/bricks/brick2/ozone6 (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
auto-delete: disable
[root@dhcp35-115 ~]# 
[root@dhcp35-115 ~]# 
[root@dhcp35-115 ~]# gluster v get ozone all | grep expiry
features.expiry-time                    120                                     
[root@dhcp35-115 ~]# 
[root@dhcp35-115 ~]# 
[root@dhcp35-115 ~]# gluster v set ozone features.expiry-time 20
volume set: success
[root@dhcp35-115 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.8.4-3.el6rhs.x86_64
glusterfs-api-devel-3.8.4-3.el6rhs.x86_64
glusterfs-cli-3.8.4-3.el6rhs.x86_64
glusterfs-devel-3.8.4-3.el6rhs.x86_64
gluster-nagios-addons-0.2.8-1.el6rhs.x86_64
glusterfs-libs-3.8.4-3.el6rhs.x86_64
glusterfs-fuse-3.8.4-3.el6rhs.x86_64
glusterfs-geo-replication-3.8.4-3.el6rhs.x86_64
nfs-ganesha-gluster-2.3.1-8.el6rhs.x86_64
glusterfs-debuginfo-3.8.4-2.el6rhs.x86_64
glusterfs-api-3.8.4-3.el6rhs.x86_64
glusterfs-server-3.8.4-3.el6rhs.x86_64
glusterfs-ganesha-3.8.4-3.el6rhs.x86_64
gluster-nagios-common-0.2.4-1.el6rhs.noarch
vdsm-gluster-4.16.30-1.5.el6rhs.noarch
python-gluster-3.8.4-3.el6rhs.noarch
glusterfs-rdma-3.8.4-3.el6rhs.x86_64
glusterfs-3.8.4-3.el6rhs.x86_64
glusterfs-events-3.8.4-3.el6rhs.x86_64
[root@dhcp35-115 ~]# 
[root@dhcp35-115 ~]# 
[root@dhcp35-115 ~]# gluster peer status
Number of Peers: 3

Hostname: dhcp35-101.lab.eng.blr.redhat.com
Uuid: a3bd23b9-f70a-47f5-9c95-7a271f5f1e18
State: Peer in Cluster (Connected)

Hostname: 10.70.35.104
Uuid: 10335359-1c70-42b2-bcce-6215a973678d
State: Peer in Cluster (Connected)

Hostname: 10.70.35.100
Uuid: fcfacf2e-57fb-45ba-b1e1-e4ba640a4de5
State: Peer in Cluster (Connected)
[root@dhcp35-115 ~]# 




[2016-11-09 12:18:19.285222] W [dict.c:1410:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x8ce06) [0x7fe3f9b8ce06] -->/usr/lib64/libglusterfs.so.0(dict_get_str_boolean+0x22) [0x7fe40545aef2] -->/usr/lib64/libglusterfs.so.0(+0x21e2e) [0x7fe405459e2e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument]
[2016-11-09 12:18:19.286055] W [dict.c:1410:dict_get_with_ref] (-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x8ce06) [0x7fe3f9b8ce06] -->/usr/lib64/libglusterfs.so.0(dict_get_str_boolean+0x22) [0x7fe40545aef2] -->/usr/lib64/libglusterfs.so.0(+0x21e2e) [0x7fe405459e2e] ) 0-dict: dict OR key (graph-check) is NULL [Invalid argument]
[2016-11-09 12:18:19.289213] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2016-11-09 12:18:19.289257] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2016-11-09 12:18:19.289320] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2016-11-09 12:18:19.289349] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2016-11-09 12:18:19.289410] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2016-11-09 12:18:19.289438] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2016-11-09 12:18:19.295639] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0xcff62) [0x7fe3f9bcff62] -->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0xcfc4c) [0x7fe3f9bcfc4c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) [0x7fe4054b0a9e] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=ozone -o features.expiry-time=20 --gd-workdir=/var/lib/glusterd

Comment 13 errata-xmlrpc 2017-03-23 05:29:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

