Bug 176280 - Clustat reports "Error: no message?!"
Summary: Clustat reports "Error: no message?!"
Status: CLOSED DUPLICATE of bug 175108
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: rgmanager
Version: 4
Hardware: ia64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Lon Hohberger
QA Contact: Cluster QE
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2005-12-20 19:59 UTC by Dennis Preston
Modified: 2009-04-16 20:19 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2006-01-05 22:35:40 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---



Description Dennis Preston 2005-12-20 19:59:22 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.10) Gecko/20050716 Firefox/1.0.6

Description of problem:
Configured a cluster with several VIP services that are not bound to any IP address or Ethernet port. Running clustat then produces the error shown below. What could this error be?

[root@dev302 bin]# clustat
Error: no message?!
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  dev301                                   Online, rgmanager
  dev302                                   Online, Local, rgmanager

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  192.168.0.120        (none)                         stopped
  192.168.0.119        (none)                         stopped
  192.168.0.118        (none)                         stopped
  192.168.0.117        (none)                         stopped
  192.168.0.116        (none)                         stopped
  192.168.0.115        (none)                         stopped
  192.168.0.114        (none)                         stopped
  192.168.0.113        (none)                         stopped
  192.168.0.112        (none)                         stopped
  192.168.0.111        (none)                         stopped
  192.168.0.110        (none)                         stopped
  192.168.0.109        (none)                         stopped
  192.168.0.108        (none)                         stopped
  192.168.0.107        (none)                         stopped
  192.168.0.106        (none)                         stopped
  192.168.0.105        (none)                         stopped
  192.168.0.104        (none)                         stopped
  192.168.0.103        (dev301)                       disabled
  192.168.0.102        (dev301)                       disabled
  192.168.0.101        (none)                         stopped
  192.168.0.100        (none)                         stopped
  dev3-mgmt            dev301                         started
  10.250.1.234         dev301                         started
  snapshot             dev301                         started
  email_notifier       dev302                         started



Good output looks like:

[root@dev302 bin]# clustat
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  dev301                                   Online, rgmanager
  dev302                                   Online, Local, rgmanager

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  dev3-mgmt            dev301                         started
  10.250.1.234         dev301                         started
  snapshot             dev301                         started
  email_notifier       dev302                         started




Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Configure some nonsense services, i.e., VIPs that are not bound to any port.
2. Run clustat.
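
For illustration, a minimal cluster.conf fragment for such a service might look like the following. This is a sketch, not taken from the reporter's configuration; the service name and address are hypothetical, and the subnet is assumed to be one that no node's interfaces can actually host:

```xml
<!-- Illustrative fragment only: an rgmanager service wrapping a single
     virtual IP on a subnet no cluster node is attached to. -->
<rm>
  <service name="192.168.0.120" autostart="0">
    <!-- VIP resource with no reachable Ethernet port to bind to -->
    <ip address="192.168.0.120" monitor_link="1"/>
  </service>
</rm>
```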
  

Additional info:

Comment 1 Lon Hohberger 2006-01-05 22:35:40 UTC
I have this solved in the U3 beta, or it should be. Rgmanager was storing *all* resources
in its distributed database, and was reporting incorrect information when it did so.




*** This bug has been marked as a duplicate of 175108 ***

