Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). This same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2012-0897.html
Description of problem:

I saw this error while running I/O from multiple RHEL 5 NFS clients, and while relocating this HA NFS service.

<rm>
  <failoverdomains>
    <failoverdomain name="GRANT_domain" ordered="0" restricted="0">
      <failoverdomainnode name="grant-01" priority="1"/>
      <failoverdomainnode name="grant-02" priority="1"/>
      <failoverdomainnode name="grant-03" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="10.15.89.208" monitor_link="1"/>
    <fs device="/dev/GRANT/GRANT0" force_fsck="0" force_unmount="1" self_fence="1" fsid="6427" fstype="ext3" mountpoint="/mnt/grant1" name="GRANT0" options=""/>
    <nfsserver name="GRANT nfs server"/>
    <nfsclient name="*" options="rw" target="*"/>
  </resources>
  <service autostart="1" domain="GRANT_domain" name="nfs1">
    <fs ref="GRANT0">
      <nfsserver ref="GRANT nfs server">
        <nfsclient ref="*"/>
      </nfsserver>
    </fs>
    <ip ref="10.15.89.208"/>
  </service>
</rm>

[root@grant-03 ~]# clustat
Cluster Status for GRANT @ Fri Sep 17 16:00:42 2010
Member Status: Quorate

 Member Name          ID   Status
 ------ ----          ---- ------
 grant-01             1    Online, rgmanager
 grant-02             2    Online, rgmanager
 grant-03             3    Online, Local, rgmanager

 Service Name         Owner (Last)     State
 ------- ----         ------------     -----
 service:nfs1         grant-01         started

Syslog on grant-03:

Sep 17 14:31:52 grant-03 kernel: NFSD: Using /mnt/grant1/.clumanager/nfs/v4recovery as the NFSv4 state recovery directory
Sep 17 14:31:52 grant-03 kernel: NFSD: starting 90-second grace period
Sep 17 14:31:53 grant-03 rgmanager[5682]: Started NFS Server GRANT nfs server
Sep 17 14:31:53 grant-03 rgmanager[5716]: Adding export: *:/mnt/grant1 (rw)
Sep 17 14:31:54 grant-03 rgmanager[5789]: Adding IPv4 address 10.15.89.208/24 to eth0
Sep 17 14:31:58 grant-03 in.rdiscd[5862]: setsockopt (IP_ADD_MEMBERSHIP): Address already in use
Sep 17 14:31:58 grant-03 in.rdiscd[5862]: Failed joining addresses
Sep 17 14:31:58 grant-03 rgmanager[4188]: Service service:nfs1 started
Sep 17 14:32:06 grant-03 xinetd[1961]: START: qarsh pid=5866 from=::ffff:10.15.80.47
Sep 17 14:32:06 grant-03 qarshd[5866]: Talking to peer 10.15.80.47:33099
Sep 17 14:32:06 grant-03 qarshd[5866]: Running cmdline: clustat -x
Sep 17 14:32:06 grant-03 xinetd[1961]: EXIT: qarsh status=0 pid=5866 duration=0(sec)
Sep 17 14:32:41 grant-03 kernel: statd: server rpc.statd not responding, timed out
Sep 17 14:32:41 grant-03 kernel: lockd: cannot monitor flea-10
Sep 17 14:33:22 grant-03 rgmanager[4188]: #37: Error receiving header from 1 sz=0 CTX 0x240a130
Sep 17 15:09:41 grant-03 rgmanager[4188]: Stopping service service:nfs1
Sep 17 15:09:41 grant-03 rgmanager[410]: Removing IPv4 address 10.15.89.208/24 from eth0
Sep 17 15:09:52 grant-03 rgmanager[485]: Removing export: *:/mnt/grant1
Sep 17 15:09:52 grant-03 rgmanager[518]: Stopping NFS daemons
Sep 17 15:09:52 grant-03 mountd[5639]: Caught signal 15, un-registering and exiting.
Sep 17 15:09:52 grant-03 kernel: nfsd: last server has exited, flushing export cache
Sep 17 15:09:54 grant-03 rgmanager[634]: Stopping rpc.statd
Sep 17 15:09:55 grant-03 rgmanager[778]: unmounting /mnt/grant1
Sep 17 15:09:55 grant-03 rgmanager[4188]: Service service:nfs1 is stopped

The NFS clients had these errors:

[flea-10] [mtfile_lock3] write lock failed on /mnt/grant1/flea-10/mtfile_lock3 at 204748800 for 34869: No locks available
[flea-10] [mtfile_lock3] write lock failed on /mnt/grant1/flea-10/mtfile_lock3 at 204748288 for 45076: Input/output error
[flea-10] [mtfile_lock3] write lock failed on /mnt/grant1/flea-10/mtfile_lock3 at 204746752 for 6220: Input/output error

I'll post the entire logs from each of the three cluster nodes.

Version-Release number of selected component (if applicable):

Linux grant-03 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
rgmanager-3.0.12-10.el6.x86_64