Bug 176280 - Clustat reports "Error: no message?!"
Status: CLOSED DUPLICATE of bug 175108
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: rgmanager
Version: 4
Hardware: ia64 Linux
Severity: medium
Assigned To: Lon Hohberger
QA Contact: Cluster QE
Reported: 2005-12-20 14:59 EST by Dennis Preston
Modified: 2009-04-16 16:19 EDT
Doc Type: Bug Fix
Last Closed: 2006-01-05 17:35:40 EST
Description Dennis Preston 2005-12-20 14:59:22 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.10) Gecko/20050716 Firefox/1.0.6

Description of problem:
Configured a cluster with several VIPs that are not bound to any IP address or Ethernet port, then ran clustat. What could this error be?

[root@dev302 bin]# clustat
Error: no message?!
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  dev301                                   Online, rgmanager
  dev302                                   Online, Local, rgmanager

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  192.168.0.120        (none)                         stopped
  192.168.0.119        (none)                         stopped
  192.168.0.118        (none)                         stopped
  192.168.0.117        (none)                         stopped
  192.168.0.116        (none)                         stopped
  192.168.0.115        (none)                         stopped
  192.168.0.114        (none)                         stopped
  192.168.0.113        (none)                         stopped
  192.168.0.112        (none)                         stopped
  192.168.0.111        (none)                         stopped
  192.168.0.110        (none)                         stopped
  192.168.0.109        (none)                         stopped
  192.168.0.108        (none)                         stopped
  192.168.0.107        (none)                         stopped
  192.168.0.106        (none)                         stopped
  192.168.0.105        (none)                         stopped
  192.168.0.104        (none)                         stopped
  192.168.0.103        (dev301)                       disabled
  192.168.0.102        (dev301)                       disabled
  192.168.0.101        (none)                         stopped
  192.168.0.100        (none)                         stopped
  dev3-mgmt            dev301                         started
  10.250.1.234         dev301                         started
  snapshot             dev301                         started
  email_notifier       dev302                         started


Good output looks like:

[root@dev302 bin]# clustat
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  dev301                                   Online, rgmanager
  dev302                                   Online, Local, rgmanager

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  dev3-mgmt            dev301                         started
  10.250.1.234         dev301                         started
  snapshot             dev301                         started
  email_notifier       dev302                         started




Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Configure some nonsense services, i.e. VIPs that are not bound to any port.
2. Run clustat.
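For context, a VIP service of the kind described in this report is defined in /etc/cluster/cluster.conf with an <ip> resource inside a <service>. A minimal, hypothetical fragment (names and addresses are illustrative, not taken from the reporter's configuration) might look like:

```xml
<!-- Hypothetical cluster.conf fragment: a VIP service whose address
     need not match any subnet on the node's Ethernet ports. -->
<rm>
  <service name="192.168.0.120" autostart="1">
    <ip address="192.168.0.120" monitor_link="1"/>
  </service>
</rm>
```

With many such services defined, the affected rgmanager build caused clustat to print "Error: no message?!" and to list them all, as shown in the output above.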
  

Additional info:
Comment 1 Lon Hohberger 2006-01-05 17:35:40 EST
I have this solved in U3 beta, or should.  Rgmanager was storing *all* resources
in its distributed database - and reporting incorrect information when it did so.




*** This bug has been marked as a duplicate of 175108 ***
