Bug 1337450
| Summary: | [Bitrot+Sharding] Scrub status shows incorrect values for 'files scrubbed' and 'files skipped' | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sweta Anandpara <sanandpa> |
| Component: | bitrot | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Sweta Anandpara <sanandpa> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | amukherj, khiremat, rcyriac, rhinduja, rhs-bugs |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.8.4-1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1356851 (view as bug list) | Environment: | |
| Last Closed: | 2017-03-23 05:31:45 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1351522, 1356851, 1357973, 1357975 | | |
Description
Sweta Anandpara
2016-05-19 09:06:44 UTC
Upstream patches:

* http://review.gluster.org/#/c/14927/ (master)
* http://review.gluster.org/#/c/14958/ (3.7)
* http://review.gluster.org/#/c/14959/ (3.8)

(In reply to Kotresh HR from comment #4)
> Upstream Patches
>
> http://review.gluster.org/#/c/14927/ (master)
> http://review.gluster.org/#/c/14958/ (3.7)
> http://review.gluster.org/#/c/14959/ (3.8)

The fix is available in rhgs-3.2.0 as a rebase to GlusterFS 3.8.4.

Tested and verified this on the build glusterfs-3.8.4-3. Had a 4-node setup with bitrot and sharding enabled on a 2x2 volume as well as on an arbiter volume. Created files and observed the scrub status output. Did end up hitting bz 1378466 and waited it out; eventually the correct values are reported in the 'Number of Scrubbed files' and 'Number of Skipped files' fields. Moving this bugzilla to verified in 3.2. Detailed logs are pasted below.

```
[root@dhcp35-101 fd]# gluster peer status
Number of Peers: 3

Hostname: 10.70.35.100
Uuid: fcfacf2e-57fb-45ba-b1e1-e4ba640a4de5
State: Peer in Cluster (Connected)

Hostname: 10.70.35.104
Uuid: 10335359-1c70-42b2-bcce-6215a973678d
State: Peer in Cluster (Connected)

Hostname: dhcp35-115.lab.eng.blr.redhat.com
Uuid: 6ac165c0-317f-42ad-8262-953995171dbb
State: Peer in Cluster (Connected)

[root@dhcp35-101 fd]# rpm -qa | grep gluster
python-gluster-3.8.4-3.el6rhs.noarch
glusterfs-rdma-3.8.4-3.el6rhs.x86_64
glusterfs-api-3.8.4-3.el6rhs.x86_64
glusterfs-server-3.8.4-3.el6rhs.x86_64
glusterfs-ganesha-3.8.4-3.el6rhs.x86_64
gluster-nagios-addons-0.2.8-1.el6rhs.x86_64
glusterfs-libs-3.8.4-3.el6rhs.x86_64
glusterfs-fuse-3.8.4-3.el6rhs.x86_64
glusterfs-geo-replication-3.8.4-3.el6rhs.x86_64
gluster-nagios-common-0.2.4-1.el6rhs.noarch
vdsm-gluster-4.16.30-1.5.el6rhs.noarch
glusterfs-3.8.4-3.el6rhs.x86_64
glusterfs-cli-3.8.4-3.el6rhs.x86_64
glusterfs-devel-3.8.4-3.el6rhs.x86_64
glusterfs-events-3.8.4-3.el6rhs.x86_64
glusterfs-client-xlators-3.8.4-3.el6rhs.x86_64
glusterfs-api-devel-3.8.4-3.el6rhs.x86_64
nfs-ganesha-gluster-2.3.1-8.el6rhs.x86_64
glusterfs-debuginfo-3.8.4-2.el6rhs.x86_64

[root@dhcp35-101 fd]# gluster v info

Volume Name: nash
Type: Distributed-Replicate
Volume ID: d9c962de-5e4a-4fa9-a9c4-89b6803e543f
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.35.115:/bricks/brick1/nash0
Brick2: 10.70.35.100:/bricks/brick1/nash1
Brick3: 10.70.35.101:/bricks/brick1/nash2
Brick4: 10.70.35.104:/bricks/brick1/nash3
Options Reconfigured:
features.shard: on
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
auto-delete: disable

Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 630022dd-1f6c-423e-bad6-22fb16f9fbcf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.35.115:/bricks/brick1/ozone
Brick2: 10.70.35.100:/bricks/brick1/ozone
Brick3: 10.70.35.101:/bricks/brick1/ozone (arbiter)
Brick4: 10.70.35.115:/bricks/brick2/ozone4
Brick5: 10.70.35.100:/bricks/brick2/ozone5
Brick6: 10.70.35.101:/bricks/brick2/ozone6 (arbiter)
Options Reconfigured:
features.scrub-freq: hourly
features.shard: on
features.scrub: Active
features.bitrot: on
features.expiry-time: 20
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
auto-delete: disable

[root@dhcp35-101 fd]#
[root@dhcp35-101 fd]# gluster v bitrot nash scrub status

Volume name : nash

State of scrub: Active (Idle)

Scrub impact: lazy

Scrub frequency: hourly

Bitrot error log location: /var/log/glusterfs/bitd.log

Scrubber error log location: /var/log/glusterfs/scrub.log

=========================================================

Node: localhost

Number of Scrubbed files: 4
Number of Skipped files: 0
Last completed scrub time: 2016-11-11 08:17:09
Duration of last scrub (D:M:H:M:S): 0:0:0:24
Error count: 0

=========================================================

Node: 10.70.35.100

Number of Scrubbed files: 1
Number of Skipped files: 0
Last completed scrub time: 2016-11-11 08:17:15
Duration of last scrub (D:M:H:M:S): 0:0:0:30
Error count: 0

=========================================================

Node: dhcp35-115.lab.eng.blr.redhat.com

Number of Scrubbed files: 1
Number of Skipped files: 0
Last completed scrub time: 2016-11-11 08:17:15
Duration of last scrub (D:M:H:M:S): 0:0:0:30
Error count: 0

=========================================================

Node: 10.70.35.104

Number of Scrubbed files: 4
Number of Skipped files: 0
Last completed scrub time: 2016-11-11 08:17:09
Duration of last scrub (D:M:H:M:S): 0:0:0:23
Error count: 0

=========================================================

[root@dhcp35-101 fd]#
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
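For anyone re-running this verification by hand, below is a minimal sketch of the kind of cross-check described above. It is not part of the original report: the volume name (`$VOL`, defaulting to nash) and the mount path are assumptions, the sharding and scrub-frequency settings simply mirror the `features.shard: on` / `features.scrub-freq: hourly` options visible in the `gluster v info` output, and the totals are summed from the per-node 'Number of Scrubbed files' / 'Number of Skipped files' counters in `scrub status`.

```bash
#!/bin/bash
# Sketch only: rough cross-check of scrub status counters on a volume with
# bitrot and sharding enabled. Volume name and mount path are assumptions.
set -e

VOL=${1:-nash}          # hypothetical default, matching the volume in the logs
MNT=${2:-/mnt/$VOL}     # hypothetical FUSE mount point

# Enable sharding and bitrot; the hourly scrub frequency mirrors the
# 'features.scrub-freq: hourly' option shown in the vol info output above.
gluster volume set "$VOL" features.shard on
gluster volume bitrot "$VOL" enable
gluster volume bitrot "$VOL" scrub-frequency hourly

# ... create some files on the mount, then wait for a scrub cycle ...

# Sum the per-node counters reported by scrub status.
STATUS=$(gluster volume bitrot "$VOL" scrub status)
SCRUBBED=$(echo "$STATUS" | awk -F': ' '/Number of Scrubbed files/ {s += $2} END {print s + 0}')
SKIPPED=$(echo "$STATUS"  | awk -F': ' '/Number of Skipped files/  {s += $2} END {print s + 0}')

# Number of regular files visible on the mount, for comparison. Exact totals
# depend on replica count and on sharding, so this is only a sanity check.
CREATED=$(find "$MNT" -type f | wc -l)

echo "scrubbed=$SCRUBBED skipped=$SKIPPED files_on_mount=$CREATED"
```

This is only a rough sanity check; the authoritative verification remains the scrub status output captured in the logs above.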