Bug 613868 - Remove fence_virsh from luci UI since this fence is not supported with RHEL HA/Cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: luci
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assignee: Chris Feist
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-07-13 02:37 UTC by Ryan Mitchell
Modified: 2016-04-26 15:53 UTC
CC: 6 users

Fixed In Version: luci-0.22.2-9.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-10 22:11:46 UTC
Target Upstream Version:
Embargoed:


Attachments
screenshot when creating virsh_fence_device screen (no login field) (79.83 KB, image/png)
2010-07-13 02:37 UTC, Ryan Mitchell
screenshot of error after creating virsh_fence_device. (66.21 KB, image/png)
2010-07-13 02:38 UTC, Ryan Mitchell

Description Ryan Mitchell 2010-07-13 02:37:43 UTC
Created attachment 431331 [details]
screenshot when creating virsh_fence_device screen (no login field)

Description of problem:
When attempting to create a shared "virsh fence agent" fence device, the fields presented for configuring the device do not include a "login" field.  After filling out all the available fields and clicking Submit, I get the error:

No value for required attribute "login" was given for fence "fencedevicename"

Version-Release number of selected component (if applicable):
$ rpm -q luci
luci-0.22.2-3.el6.x86_64
$ rpm -q cman
cman-3.0.12-9.el6.x86_64
$ uname -r
2.6.32-42.el6.x86_64

How reproducible:
Very reproducible


Steps to Reproduce:
1. Install luci on RHEL6
2. Create a cluster with at least 2 nodes (preferably virtual machines, but it's not important)
3. Click through to manage your cluster.  On the cluster management page, choose the "Fence Devices" tab.
4. Choose "Add" option.  Select "virsh fence agent" from the list.
5. Fill out the fields and click "Submit".
  
Actual results:
- No fence device is created.
- Error messages:
No value for required attribute "login" was given for fence "fencedevicename"

Expected results:
- Fence device is created.
- No error message is displayed

Additional info:
- I think we need to add a "login" field to the interface so you can specify one.
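For reference, the attribute luci fails to collect corresponds to the "login" option of fence_virsh in cluster.conf. A hand-written device entry that satisfies the validation would look roughly like this (hostname and credentials below are placeholders, not from this report):

```xml
<!-- Sketch of a cluster.conf fence device entry for fence_virsh.
     "host.example.com", "root", and "secret" are placeholder values. -->
<fencedevices>
  <fencedevice agent="fence_virsh" name="fencedevicename"
               ipaddr="host.example.com" login="root" passwd="secret"/>
</fencedevices>
```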

Comment 1 Ryan Mitchell 2010-07-13 02:38:36 UTC
Created attachment 431332 [details]
screenshot of error after creating virsh_fence_device.

Comment 3 Perry Myers 2010-07-13 03:51:37 UTC
fence_virsh is not supported with either RHEL5 or RHEL6 cluster.  The fence agent is provided as a development tool only, for individuals who want a fencing-like device for virtual machines, but only for standalone (i.e. non-cluster) usage.

Please see the support matrix at:
https://access.redhat.com/kb/docs/DOC-30003

"fence_virsh is included in RHEL5 as a test fence agent.  It is shipped and can be used by developers but it is not formally supported in conjunction with Red Hat Enterprise Linux Clustering"

Therefore this bug will be changed to remove fence_virsh from the luci UI, since it should not have been there in the first place.  The supported virtualization fencing agents that can be used with RHEL HA/Cluster Suite are fence_xvm and fence_virt.
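As a rough illustration of the supported setup, a clustered guest's cluster.conf would reference fence_xvm instead. The sketch below is a minimal assumed example; cluster, node, and device names are placeholders:

```xml
<!-- Minimal cluster.conf sketch fencing a guest via fence_xvm.
     "testcluster", "guest1", and "xvmfence" are placeholder names. -->
<cluster name="testcluster" config_version="1">
  <clusternodes>
    <clusternode name="guest1" nodeid="1">
      <fence>
        <method name="1">
          <!-- "domain" is the libvirt domain name of this guest -->
          <device name="xvmfence" domain="guest1"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="xvmfence"/>
  </fencedevices>
</cluster>
```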

Comment 4 Chris Feist 2010-07-13 17:56:17 UTC
Fixed in a958cd4bc22cbac3be9bfb661fa335d265232b6c.

Comment 6 Everett Bennett 2010-07-20 12:50:12 UTC
Any recommendations on what fence device to use in a RHEL 5.5 KVM situation?  I have a RHEL 6.0 Beta 2 host on an IBM blade for test purposes.  I currently have 4 RHEL 5.5 KVM guests running in cluster mode.

Comment 7 Perry Myers 2010-07-20 14:35:17 UTC
(In reply to comment #6)
> Any recommendations on what fence device to use in a RHEL 5.5 KVM situation?  I
> have a RHEL 6.0 Beta 2 host on an IBM blade for test purposes.  I currently
> have 4 RHEL 5.5 KVM guests running in cluster mode.

If running RHEL5 clustered guests on top of a RHEL6 host, the correct fence agent to use is fence_xvm on RHEL5.  fence_xvm will work properly with fence_virtd running on the host OS.  Lon designed fence_virt/fence_virtd to be backwards compatible with fence_xvm/fence_xvmd for exactly this use case.

Note, though: the configuration you are running is a Tech Preview, so it is not yet fully tested or fully supported at this time.
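For context, the host-side counterpart of fence_xvm is the fence_virtd daemon, configured via /etc/fence_virt.conf (which `fence_virtd -c` can generate interactively). The fragment below is an assumed minimal sketch; the interface name and key path are placeholders, not taken from this bug:

```
# Minimal /etc/fence_virt.conf sketch: multicast listener + libvirt backend.
# "eth0" and the key_file path are assumptions for illustration.
fence_virtd {
	listener = "multicast";
	backend = "libvirt";
}

listeners {
	multicast {
		key_file = "/etc/cluster/fence_xvm.key";
		interface = "eth0";
	}
}

backends {
	libvirt {
		uri = "qemu:///system";
	}
}
```

The same key file would need to be copied to each guest so fence_xvm can authenticate to the daemon.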

Comment 14 releng-rhel@redhat.com 2010-11-10 22:11:46 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

