Bug 595380 - RFE: Extend support for VMware Virtual Center in libvirt ESX driver
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware: All  OS: Linux
Priority: low  Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Daniel Veillard
QA Contact: Virtualization Bugs
Keywords: FutureFeature
Depends On:
Blocks: 590696 595378 610811 Rhel6.1LibvirtTier2
Reported: 2010-05-24 09:51 EDT by Perry Myers
Modified: 2011-03-24 13:18 EDT
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Clones: 610811
Environment:
Last Closed: 2011-03-24 13:18:12 EDT


Attachments: None
Description Perry Myers 2010-05-24 09:51:27 EDT
Description of problem:
Presently the libvirt ESX driver only supports direct connections to ESX hosts.  We need to add support for the ESX driver to talk to Virtual Center so that we can take advantage of VC's ability to track guests through migrations across ESX hosts.
Comment 2 Daniel Berrange 2010-08-17 08:34:37 EDT
Upstream libvirt now has the ability to manage ESX hosts via VirtualCenter. The URI syntax is:

  vpx://example-vcenter.com/dc1/srv1     (VPX over HTTPS, select ESX server 'srv1' in datacenter 'dc1')

NB, as per that example, you still need to provide the hostname of the specific ESX server to be managed via VC.  We are not able to provide a connection that manages all ESX hosts at once, since it is not possible to provide the required domain ID + name uniqueness guarantees across multiple hosts.  Data-center-wide management APIs are the realm of something like RHEV-M or DeltaCloud.
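
For illustration, a minimal connection sketch in Python using libvirt-python, assuming the hostnames from the example above and hypothetical credentials:

  import libvirt

  def request_cred(credentials, user_data):
      # Supply vCenter credentials when libvirt asks for them.
      for cred in credentials:
          if cred[0] == libvirt.VIR_CRED_AUTHNAME:
              cred[4] = 'administrator'      # hypothetical user name
          elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
              cred[4] = 'secret'             # hypothetical password
      return 0

  auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
          request_cred, None]

  # no_verify=1 skips SSL certificate checking; drop it if the vCenter
  # certificate is trusted.
  conn = libvirt.openAuth('vpx://example-vcenter.com/dc1/srv1?no_verify=1',
                          auth, 0)
  for dom in conn.listAllDomains():
      print(dom.name(), dom.UUIDString())
  conn.close()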
Comment 3 Perry Myers 2010-08-17 08:44:52 EDT
Re: the NB in comment #2.

The problem is that GuestA running on ESX Host1 needs to fence GuestB.  But which host is GuestB running on?  The guests don't have that knowledge, so they need to go to Virtual Center to get the VM poweroff command issued properly.

If this syntax means that I have to go to a random host (not necessarily the host that is running the guest) to issue the fencing operation, then that is not so bad, since I can just pick my own host (for example).

But if I have to explicitly pick the host that the guest is running on, then this operation isn't useful for ensuring that a guest is dead before continuing cluster operations.

If it is the case that any host will do, there's still the issue of "what if the host I pick is non-responsive?"  Do we then have to make our fencing agent aware of all of the hosts in the ESX cluster and try each one in sequence?  Suboptimal, but I suppose that is something we could do.
Comment 4 Perry Myers 2010-08-17 09:06:23 EDT
Ok, so talking to danpb here, this is what we came up with...

We shouldn't really try to use the libvirt VMware driver directly as a fence agent.  Instead, we should copy the relevant SOAP code from libvirt and rework fence_vmware to remove the VMware Perl API dependency.

danpb further mentions that as long as we know the UUIDs of the guests (which we will), we should not need the ESX hostname when issuing guest destroy commands through Virtual Center.

So we'll need to file a separate bug against fence-agents to rework fence_vmware.
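
For reference, once a vpx:// connection like the one in comment 2 is open, a destroy-by-UUID looks roughly like this with the libvirt API (a sketch only; the UUID below is a placeholder, and the actual fence_vmware rework would carry the SOAP code directly rather than depend on libvirt):

  # Assumes 'conn' was opened against the vpx:// URI as in comment 2.
  guest_uuid = '00000000-0000-0000-0000-000000000000'  # placeholder UUID

  dom = conn.lookupByUUIDString(guest_uuid)
  if dom.isActive():
      dom.destroy()  # hard power-off, so the cluster can be sure the guest is dead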
Comment 8 Dave Allan 2011-03-24 13:18:12 EDT
As Dan noted in comment 2, we support VMware Virtual Center in the libvirt ESX driver, but data-center-wide management APIs are the realm of something like RHEV-M or DeltaCloud, and we don't intend to change that.  Closing as WONTFIX.  I've also changed the BZ summary slightly to reflect that subtlety.
