Bug 595380 - RFE: Extend support for VMware Virtual Center in libvirt ESX driver
Summary: RFE: Extend support for VMware Virtual Center in libvirt ESX driver
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware: All
OS: Linux
Target Milestone: rc
Assignee: Daniel Veillard
QA Contact: Virtualization Bugs
Depends On:
Blocks: 590696 595378 610811 Rhel6.1LibvirtTier2
Reported: 2010-05-24 13:51 UTC by Perry Myers
Modified: 2011-03-24 17:18 UTC (History)
8 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
: 610811
Last Closed: 2011-03-24 17:18:12 UTC
Target Upstream Version:


Description Perry Myers 2010-05-24 13:51:27 UTC
Description of problem:
Presently the libvirt ESX driver only supports direct connections to ESX hosts. We need to add support for the ESX driver to talk to Virtual Center so that we can take advantage of VC's ability to track guests through migrations across ESX hosts.

Comment 2 Daniel Berrangé 2010-08-17 12:34:37 UTC
Upstream libvirt now has the ability to manage ESX hosts via VirtualCenter. The URI syntax is:

  vpx://example-vcenter.com/dc1/srv1     (VPX over HTTPS, select ESX server 'srv1' in datacenter 'dc1')

NB, as per that example you still need to provide the hostname of the specific ESX server to be managed via VC. We are not able to provide a connection that manages all ESX hosts at once, since it is not possible to guarantee the required domain ID and name uniqueness across multiple hosts. Data-center-wide management APIs are the realm of something like RHEV-M or DeltaCloud.
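For illustration, the vpx:// URI above follows ordinary URI conventions, so its pieces can be pulled apart with standard tooling. A minimal Python sketch, using only the example values from this comment (real vpx URIs may also include datacenter folders, which this sketch does not handle):

```python
from urllib.parse import urlparse

def parse_vpx_uri(uri):
    """Split a libvirt vpx:// URI into (vCenter host, datacenter, ESX server).

    Assumes the simple layout from the example above:
    vpx://<vcenter-host>/<datacenter>/<esx-server>
    """
    parts = urlparse(uri)
    if parts.scheme != "vpx":
        raise ValueError("not a vpx:// URI: %s" % uri)
    path = [p for p in parts.path.split("/") if p]
    if len(path) != 2:
        raise ValueError("expected a /<datacenter>/<esx-server> path")
    datacenter, server = path
    return parts.hostname, datacenter, server

# The example from this comment:
host, dc, srv = parse_vpx_uri("vpx://example-vcenter.com/dc1/srv1")
# host == "example-vcenter.com", dc == "dc1", srv == "srv1"
```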

Comment 3 Perry Myers 2010-08-17 12:44:52 UTC
Re: the NB in comment 2.

The problem is that GuestA running on ESX Host1 needs to fence GuestB, but which host is GuestB running on? The guests don't have that knowledge, so they need to go to Virtual Center to get the vm poweroff command issued properly.

If this syntax means that I have to go through some host (not necessarily the host that is running the guest) to issue the fencing operation, then that is not so bad, since I can just pick a host myself.

But if I have to explicitly pick the host that the guest is running on, then this operation isn't useful for ensuring that a guest is dead before continuing cluster operations.

If it is the case that any host will do, there's still the issue of "what if the host I pick is non-responsive?" So we'd have to make our fencing agent aware of all of the hosts in the ESX cluster and try each one in sequence. Suboptimal, but I suppose that is something we could do.
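The "try each host in sequence" fallback described above can be sketched generically. Here `try_fence` is a hypothetical callable standing in for whatever per-host fencing call the agent ends up using; nothing about it comes from libvirt or fence_vmware:

```python
def fence_via_any_host(hosts, try_fence):
    """Attempt a fencing operation against each candidate host in turn.

    `hosts` is an ordered list of candidate host names. `try_fence(host)`
    is a hypothetical per-host fencing call that returns True on success
    and raises (or returns False) if that host is non-responsive.
    Returns the first host that completed the fence.
    """
    errors = {}
    for host in hosts:
        try:
            if try_fence(host):
                return host  # first responsive host completed the fence
        except Exception as exc:
            # Host unreachable or errored; record it and fall through.
            errors[host] = exc
    raise RuntimeError("no responsive host could fence the guest: %r" % errors)
```

A fencing agent built this way degrades gracefully when one ESX host is down, at the cost of having to be configured with the full host list.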

Comment 4 Perry Myers 2010-08-17 13:06:23 UTC
Ok, so talking to danpb here, this is what we came up with...

We shouldn't really try to use the libvirt VMware driver directly as a fence agent. Instead we should copy the relevant SOAP code from libvirt and rework fence_vmware to remove the VMware Perl API dependency.

danpb further notes that as long as we know the UUID of the guests (which we will), we should not need the ESX host name when issuing guest destroy commands through Virtual Center.

So we'll need to file a separate bug against fence-agents to rework fence_vmware.
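For reference, the UUID-based lookup described above corresponds to the vSphere SOAP API's SearchIndex.FindByUuid method. The following is a hand-built sketch of just the request body, not a drop-in implementation; the HTTPS session handling (SessionManager login, cookies), the subsequent power-off call, and all response parsing are omitted:

```python
def find_by_uuid_request(uuid):
    """Build the SOAP envelope for a vSphere SearchIndex.FindByUuid lookup.

    Sketch only: locating a VM by its UUID via Virtual Center, with no ESX
    host name required. vmSearch=true restricts the search to virtual
    machines rather than ESX hosts.
    """
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope'
        ' xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soapenv:Body>'
        '<FindByUuid xmlns="urn:vim25">'
        '<_this type="SearchIndex">SearchIndex</_this>'
        '<uuid>%s</uuid>'
        '<vmSearch>true</vmSearch>'
        '</FindByUuid>'
        '</soapenv:Body>'
        '</soapenv:Envelope>' % uuid
    )
```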

Comment 8 Dave Allan 2011-03-24 17:18:12 UTC
As Dan noted in comment 2, we support VMware Virtual Center in the libvirt ESX driver, but data-center-wide management APIs are the realm of something like RHEV-M or DeltaCloud, and we don't intend to change that. Closing as WONTFIX. I've also changed the BZ summary slightly to reflect that subtlety.
