Bug 1004747 - afr: remote operation failed: Numeric result out of range observed in glustershd during self heal
Summary: afr: remote operation failed: Numeric result out of range observed in glustershd during self heal
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-05 11:58 UTC by Rahul Hinduja
Modified: 2016-09-17 12:16 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:19:07 UTC
Target Upstream Version:



Description Rahul Hinduja 2013-09-05 11:58:34 UTC
Description of problem:
========================

While the self-heal daemon (shd) was performing self-heal, glustershd reported the following:

[2013-09-05 18:34:59.509303] W [client-rpc-fops.c:1223:client3_3_removexattr_cbk] 0-vol-dr-client-1: remote operation failed: Numerical result out of range
[2013-09-05 18:34:59.509326] I [afr-self-heal-metadata.c:174:afr_sh_metadata_sync_cbk] 0-vol-dr-replicate-0: setting attributes failed for <gfid:eb5773b2-285c-4d19-ad2f-901ece43f95d> on vol-dr-client-1 (Numerical result out of range)
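
For context, "Numerical result out of range" is the glibc strerror() text for errno ERANGE, i.e. the errno the brick returned for the removexattr fop in the first message. A quick way to confirm the mapping (shell one-liner using Python only as a strerror() lookup; not part of the original report):

python -c 'import errno, os; print(os.strerror(errno.ERANGE))'
# prints: Numerical result out of range   (errno 34)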



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-server-3.4.0.30rhs-2.el6rhs.x86_64

Steps Carried:
==============

1. Create and start a 6x2 distributed-replicate volume from 4 server nodes (a command-level sketch follows this list).
2. Mount the volume on a 3.4.0 client (FUSE and NFS).
3. Mount the volume on a 3.3.0 client (FUSE and NFS).
4. Create directories and files from all the mount points.
5. While writes were in progress from the mounts, performed a lazy unmount (umount -l) of one brick in each replica pair.
6. Once the writes completed, remounted the bricks.
7. Since the brick processes did not start after the remount, restarted glusterd on the servers where the bricks had been unmounted.
8. The self-heal daemon started self-healing.
9. While self-heal was in progress, started a replace-brick operation on the bricks that were earlier unmounted.
10. The replace-brick migration completed successfully; performed a commit.
11. Observed the out-of-range messages above in the shd logs.
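
A command-level sketch of the steps above, using the glusterfs 3.4-era CLI. The volume name, hostnames, and brick paths are illustrative placeholders, not taken from this report:

# 6x2 distribute-replicate volume across 4 servers (12 bricks;
# consecutive bricks form a replica pair)
gluster volume create vol-dr replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server3:/bricks/b2 server4:/bricks/b2 \
    server1:/bricks/b3 server2:/bricks/b3 \
    server3:/bricks/b4 server4:/bricks/b4 \
    server1:/bricks/b5 server2:/bricks/b5 \
    server3:/bricks/b6 server4:/bricks/b6
gluster volume start vol-dr

# FUSE mount on a client (NFS mount is analogous)
mount -t glusterfs server1:/vol-dr /mnt/vol-dr

# While writes run, lazily unmount the filesystem backing one
# brick of each replica pair (run on the affected servers)
umount -l /bricks/b1

# Remount the brick filesystem, then restart glusterd so the
# brick process comes back up
mount /bricks/b1
service glusterd restart

# Replace the previously unmounted brick while self-heal runs,
# then commit once migration completes
gluster volume replace-brick vol-dr server2:/bricks/b1 \
    server2:/bricks/b1-new start
gluster volume replace-brick vol-dr server2:/bricks/b1 \
    server2:/bricks/b1-new commit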

Additional info:
================

Checked the arequal checksums of all files on the source bricks and the replaced bricks; they match.
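
For reference, a sketch of that verification; the arequal-checksum invocation (-p path) follows common RHS QE usage but is an assumption here, and the brick paths are placeholders:

# on the surviving replica (source brick) and on the new brick,
# then compare the two reports
arequal-checksum -p /bricks/b1     > /tmp/arequal-src.txt
arequal-checksum -p /bricks/b1-new > /tmp/arequal-new.txt
diff /tmp/arequal-src.txt /tmp/arequal-new.txt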

Comment 4 Vivek Agarwal 2015-12-03 17:19:07 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested a review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

