Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 184039

Summary: BUG in clumanager 1.2.26.1-1 using with oracle 9i or what?
Product: Red Hat Enterprise Linux 5
Reporter: Bouraoui <mohamed.bouraoui>
Component: rgmanager
Assignee: Lon Hohberger <lhh>
Status: CLOSED DUPLICATE
QA Contact: Cluster QE <mspqa-list>
Severity: medium
Docs Contact:
Priority: medium
Version: 5.0
CC: cluster-maint
Target Milestone: ---
Target Release: ---
Hardware: i386
OS: Linux
Whiteboard:
Fixed In Version: 1.2.28
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2006-03-06 15:07:15 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  logs cluster suite (Flags: none)

Description Bouraoui 2006-03-05 10:08:05 UTC
Description of problem:


Version-Release number of selected component (if applicable):
clumanager-1.2.26.1-1


How reproducible:
We have set up clumanager-1.2.26.1-1, but the Oracle cluster service is
abruptly restarted by the cluster even though the service state was OK.
The cluster logs from that time are below:

Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Verified connect from member
#0 (192.168.15.11)
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Processing message on 10
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Received 20 bytes from peer
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> LOCK_MASTER_QUERY
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> replying LOCK_ACK 0
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Verified connect from member
#0 (192.168.15.11)
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Processing message on 10
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Received 188 bytes from peer
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> LOCK_LOCK | LOCK_TRYLOCK
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> lockd_trylock: member #0 lock 0
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Replying ACK
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Verified connect from member
#0 (192.168.15.11)
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Processing message on 10
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Received 188 bytes from peer
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> LOCK_UNLOCK
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> lockd_unlock: member #0 lock 1
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> ACK: lock unlocked
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Verified connect from member
#0 (192.168.15.11)
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Processing message on 10
Feb  4 14:03:07 edcpr26a clulockd[24867]: <debug> Received 20 bytes from peer
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> LOCK_MASTER_QUERY
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> replying LOCK_ACK 0
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> Verified connect from member
#0 (192.168.15.11)
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> Processing message on 10
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> Received 188 bytes from peer
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> LOCK_UNLOCK
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> lockd_unlock: member #0 lock 0
Feb  4 14:03:08 edcpr26a clulockd[24867]: <debug> ACK: lock unlocked
Feb  4 14:03:08 edcpr26a clusvcmgrd[28233]: <debug> Exec of script
/usr/lib/clumanager/services/service, action stop, service oracle_cluster
Feb  4 14:03:08 edcpr26a clusvcmgrd: [28239]: <notice> service notice: Stopping
service oracle_cluster ...
Feb  4 14:03:08 edcpr26a clusvcmgrd: [28239]: <notice> service notice: Running
user script '/etc/init.d/dbora_rac stop'
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> ioctl(fd,SIOCGARP,ar [bond0]):
No such device or address
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> Connect: Member #1
(192.168.15.12) [IPv4]
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> Processing message on 10
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> Received 188 bytes from peer
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> LOCK_LOCK | LOCK_TRYLOCK
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> lockd_trylock: member #1 lock 0
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> Replying ACK
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> ioctl(fd,SIOCGARP,ar [bond0]):
No such device or address
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> Connect: Member #1
(192.168.15.12) [IPv4]
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> Processing message on 10
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> Received 188 bytes from peer
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> LOCK_UNLOCK
Feb  4 14:03:12 edcpr26a clulockd[24867]: <debug> lockd_unlock: member #1 lock 0

Steps to Reproduce:
1.
2.
3.
  
Actual results:
The oracle service is restarted abruptly.

Expected results:
The oracle cluster service should remain stable.

Additional info:

Comment 1 Bouraoui 2006-03-05 10:08:05 UTC
Created attachment 125665 [details]
logs cluster suite

Comment 2 Lon Hohberger 2006-03-06 15:07:15 UTC
Actually, it has nothing to do with Oracle or clumanager.  Sometimes, for no
reason at all, certain ethernet modules (*especially* e1000) fail to correctly
report information.  That is, ifconfig fails entirely, reporting no data
whatsoever.  It is actually a kernel bug.
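
For illustration only (this is not clumanager source code): the
"ioctl(fd,SIOCGARP,ar [bond0]): No such device or address" messages in the
logs correspond to the SIOCGARP ioctl failing with ENXIO. A minimal sketch of
such an ARP-table query, using an example peer address and interface name,
looks roughly like this:

/* Illustrative sketch only: query the kernel ARP table with SIOCGARP and
 * print the error in the same form it appears in the cluster logs. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if_arp.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct arpreq ar;
    struct sockaddr_in *sin;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&ar, 0, sizeof(ar));
    sin = (struct sockaddr_in *)&ar.arp_pa;
    sin->sin_family = AF_INET;
    inet_pton(AF_INET, "192.168.15.12", &sin->sin_addr); /* example peer address */
    strncpy(ar.arp_dev, "bond0", sizeof(ar.arp_dev) - 1); /* example interface */

    if (ioctl(fd, SIOCGARP, &ar) < 0)
        /* With no usable ARP entry (or a driver that has stopped reporting
         * interface data), this fails with ENXIO: "No such device or address". */
        fprintf(stderr, "ioctl(fd,SIOCGARP,ar [bond0]): %s\n", strerror(errno));
    else
        printf("ARP entry found, flags=0x%x\n", ar.arp_flags);

    close(fd);
    return 0;
}

When the driver stops reporting interface data, lookups like this can fail
even though the peer itself is healthy, which is presumably the condition the
clumanager workaround mentioned below covers until the kernel fix lands.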

Later kernels should actually fix the problem entirely, making the workaround
present in current versions of clumanager unnecessary.

More information and errata information in bugzilla #163636


*** This bug has been marked as a duplicate of 163636 ***

Comment 3 Nate Straz 2007-12-13 17:18:39 UTC
Moving all RHCS ver 5 bugs to RHEL 5 so we can remove RHCS v5 which never existed.