Bug 590697

Summary: Add support for a fence agent to connect directly via libvirt to an ESX or vSphere/VirtualCenter server
Product: Red Hat Enterprise Linux 5
Reporter: Perry Myers <pmyers>
Component: cman
Assignee: Lon Hohberger <lhh>
Status: CLOSED DUPLICATE
QA Contact: Cluster QE <mspqa-list>
Severity: medium
Priority: medium
Version: 5.6
CC: berrange, cluster-maint, degts, edamato, liko, mbooth, mgrac, veillard
Target Milestone: rc
Hardware: All
OS: Linux
Doc Type: Bug Fix
Clone Of: 590696
Last Closed: 2010-09-23 19:06:41 UTC
Bug Depends On: 610811

Description Perry Myers 2010-05-10 14:00:31 UTC
+++ This bug was initially created as a clone of Bug #590696 +++

Description of problem:
fence_vmware uses a proprietary Perl API that is not in RHEL.  But the libvirt client in RHEL6 supports connection URIs for both ESX hosts and VirtualCenter/vSphere management servers.
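
For illustration only, a minimal sketch of opening such a connection with the libvirt Python bindings; the host names, credentials, and no_verify setting below are assumptions for the example, not anything specified by this bug:

    import libvirt

    # Answer the username/password requests raised by libvirt's esx/vpx drivers.
    def request_cred(credentials, user_data):
        for cred in credentials:
            if cred[0] == libvirt.VIR_CRED_AUTHNAME:
                cred[4] = "root"          # hypothetical user
            elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
                cred[4] = "secret"        # hypothetical password
        return 0

    auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
            request_cred, None]

    # Directly against a single ESX host ...
    conn = libvirt.openAuth("esx://esx01.example.com/?no_verify=1", auth, 0)

    # ... or against a vSphere/VirtualCenter server (datacenter/host in the path):
    # conn = libvirt.openAuth("vpx://vcenter.example.com/DC1/esx01.example.com?no_verify=1", auth, 0)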

We should be able to point fence_virt on a virtual cluster node at the vSphere/VirtualCenter server for all of the other guests, so that it can simply ask the management server to do the fencing for it.
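
With a connection like the one above, the off/on cycle itself reduces to domain lifecycle calls against the management server; a sketch assuming a hypothetical guest name, not the actual fence_virt code:

    # 'conn' is the libvirt connection from the previous sketch.
    dom = conn.lookupByName("cluster-node2")   # hypothetical victim guest

    if dom.isActive():
        dom.destroy()    # forced power-off, the actual fence action

    dom.create()         # power the guest back on to complete an off/on cycle
    conn.close()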

This does not affect fence_virtd at all, since in ESX deployments we have no host access and therefore no fence_virtd will be running.

This may also require backporting fence_virt or a similar agent to RHEL5 to support the RHEL5 use case.

Comment 5 Perry Myers 2010-09-23 19:06:41 UTC

*** This bug has been marked as a duplicate of bug 634567 ***