Bug 409451 - fence_scsi: request for better error if node to fence doesn't exist
Status: CLOSED CURRENTRELEASE
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: fence
Version: 4
Hardware: All
OS: Linux
Priority: low
Severity: low
Assigned To: Ryan O'Hara
QA Contact: Cluster QE
Depends On:
Blocks: 455328
Reported: 2007-12-03 16:40 EST by Corey Marthaler
Modified: 2009-10-13 11:51 EDT
CC List: 1 user

Doc Type: Bug Fix
Last Closed: 2009-10-13 11:51:01 EDT

Description Corey Marthaler 2007-12-03 16:40:45 EST
Description of problem:
I forgot I was using the other NIC interface, and it took me a lot of debugging before I remembered to use the correct name.

[root@taft-04 ~]# fence_scsi -n taft-03
Unable to execute sg_persist (/dev/sdb1).

[root@taft-04 ~]# fence_scsi -n taft-03-e2

A message like "taft-03 doesn't exist in this cluster" would be more helpful.

Version-Release number of selected component (if applicable):
fence-1.32.50-2.fencescsi.test.patch
Comment 1 Ryan O'Hara 2008-09-04 17:03:24 EDT
The easiest way to fix this is to check the nodeid after the script calls get_node_id() when generating the key. The get_node_id() routine does an XML query against cluster.conf to get the nodeid for the nodename, so if the node is not part of the cluster, the nodeid will be zero.

The downside is that we can't distinguish between a missing nodeid and a nodename that doesn't exist. Is that OK? The error would simply be something like "Unable to determine nodeid for node <nodename>". Not exactly the same as saying "Hey! This node doesn't exist," but definitely an improvement.
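
For illustration, here is a minimal sketch of that check. This is not the actual fence_scsi code: Python is used purely for readability, and the cluster.conf fragment, function names, and node names are made-up assumptions.

import sys
import xml.etree.ElementTree as ET

# Hypothetical cluster.conf fragment; a real file carries many more settings.
SAMPLE_CLUSTER_CONF = """\
<cluster name="example" config_version="1">
  <clusternodes>
    <clusternode name="taft-03-e2" nodeid="3"/>
    <clusternode name="taft-04-e2" nodeid="4"/>
  </clusternodes>
</cluster>
"""

def get_node_id(nodename, conf_xml=SAMPLE_CLUSTER_CONF):
    # Mimic the XML query against cluster.conf: return the nodeid for
    # nodename, or 0 if the node is missing or its nodeid is not set.
    root = ET.fromstring(conf_xml)
    for node in root.iter("clusternode"):
        if node.get("name") == nodename:
            return int(node.get("nodeid", 0))
    return 0

def check_node(nodename):
    node_id = get_node_id(nodename)
    if node_id == 0:
        # Either the node is not in this cluster or its nodeid is unset;
        # both cases are invalid, so report an error and exit.
        sys.exit("Unable to determine nodeid for node %s" % nodename)
    return node_id

print(check_node("taft-03-e2"))   # prints 3
check_node("taft-03")             # exits: Unable to determine nodeid for node taft-03

The error string follows the wording suggested above; the key point is that a zero nodeid is treated as fatal up front, instead of surfacing later as the confusing sg_persist failure in the original report.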
Comment 2 Ryan O'Hara 2008-09-04 17:28:04 EDT
Fixed in RHEL5.

As mentioned above, the script will simply check the nodeid we get from the XML query of cluster.conf. If the nodeid is zero, then either the node does not exist in this cluster or the nodeid is not set. Either case is invalid, so we report an error and exit.
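
With that check in place, the failure in the original report would hypothetically look something like this (exact wording may differ):

[root@taft-04 ~]# fence_scsi -n taft-03
Unable to determine nodeid for node taft-03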
Comment 3 Ryan O'Hara 2008-09-04 17:39:16 EDT
Sorry, I meant to say fixed in RHEL4. It is fixed in RHEL5 too, but that is a different BZ.
