Bug 595380 - RFE: Extend support for VMware Virtual Center in libvirt ESX driver
Summary: RFE: Extend support for VMware Virtual Center in libvirt ESX driver
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
: ---
Assignee: Daniel Veillard
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 590696 595378 610811 Rhel6.1LibvirtTier2
 
Reported: 2010-05-24 13:51 UTC by Perry Myers
Modified: 2011-03-24 17:18 UTC (History)
CC List: 8 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
: 610811 (view as bug list)
Environment:
Last Closed: 2011-03-24 17:18:12 UTC
Target Upstream Version:
Embargoed:



Description Perry Myers 2010-05-24 13:51:27 UTC
Description of problem:
Presently the libvirt ESX driver supports only direct connections to ESX hosts.  We need to add support for the ESX driver to talk to Virtual Center so that we can take advantage of VC's ability to track guests through migrations across ESX hosts.

Comment 2 Daniel Berrangé 2010-08-17 12:34:37 UTC
Upstream libvirt now has the ability to manage ESX hosts via VirtualCenter. The URI syntax is:

  vpx://example-vcenter.com/dc1/srv1     (VPX over HTTPS, select ESX server 'srv1' in datacenter 'dc1')

NB, as per that example, you still need to provide the hostname of the specific ESX server to be managed via VC.  We are not able to provide a connection that manages all ESX hosts at once, since it is not possible to provide the required domain ID + name uniqueness guarantees across multiple hosts. Data-center-wide management APIs are the realm of something like RHEV-M or DeltaCloud.
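For illustration, here is a minimal sketch of assembling such connection URIs. The `vpx_uri` helper is hypothetical (libvirt itself just takes the URI string); `no_verify=1` is the ESX driver's query parameter for skipping TLS certificate verification.

```python
from urllib.parse import quote

def vpx_uri(vcenter, datacenter, esx_host, no_verify=False):
    """Build a libvirt vpx:// URI that manages one ESX server
    ('esx_host') in a datacenter ('datacenter') through the given
    Virtual Center.  Helper name is hypothetical."""
    uri = "vpx://%s/%s/%s" % (vcenter, quote(datacenter), quote(esx_host))
    if no_verify:
        # Skip TLS certificate checks -- lab/testing use only.
        uri += "?no_verify=1"
    return uri

print(vpx_uri("example-vcenter.com", "dc1", "srv1"))
# → vpx://example-vcenter.com/dc1/srv1
```

The resulting string would then be passed to something like `virsh -c <uri>` or `libvirt.open(uri)`.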

Comment 3 Perry Myers 2010-08-17 12:44:52 UTC
Re: the NB in comment #2.

The problem is that GuestA running on ESX Host1 needs to fence GuestB.  But which host is GuestB running on?  The guests don't have that knowledge, so they need to go to Virtual Center to get the vm poweroff command issued properly.

If this syntax means that I have to go to a random host (not necessarily the host that is running the guest) to issue the fencing operation, then that is not so bad, since I can just pick a host myself.

But if I have to explicitly pick the host that the guest is running on, then this operation isn't useful for ensuring that a guest is dead before continuing cluster operations.

If it is the case that any host will do, there's still the issue of "what if the host I pick is non-responsive".  So we've got to make our fencing agent be aware of all of the hosts in the ESX cluster and try each one in sequence?  Suboptimal, but I suppose that is something that we could do.

Comment 4 Perry Myers 2010-08-17 13:06:23 UTC
Ok, so talking to danpb here, this is what we came up with...

We shouldn't really try to use the libvirt vmware driver directly as a fence agent.  Instead we should copy the relevant SOAP code from libvirt into fence_vmware, reworking it to remove the VMware Perl API dependency.

danpb further mentions that as long as we know the UUID of the guests (which we will) then we should not need the ESX host name when issuing guest destroy commands through Virtual Center.

So we'll need to file a separate bug against fence-agents to rework fence_vmware.
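danpb's point above is that Virtual Center indexes guests by UUID across the whole cluster, so a destroy request needs only the UUID, not the owning ESX host name. A toy model (all names hypothetical, purely illustrative; the real agent would speak SOAP to vCenter):

```python
class FakeVCenter:
    """Toy stand-in for Virtual Center's inventory: guests are keyed by
    UUID, so destroy() needs no ESX host name -- vCenter resolves the
    owning host internally."""

    def __init__(self):
        self._guests = {}  # uuid -> {"host": ..., "state": ...}

    def register(self, uuid, host):
        self._guests[uuid] = {"host": host, "state": "running"}

    def destroy(self, uuid):
        # Caller supplies only the UUID; the inventory already knows
        # which host the guest lives on.
        guest = self._guests[uuid]
        guest["state"] = "off"
        return guest["host"]
```

This is why fence_vmware, given a guest UUID, should be able to fence through vCenter without first locating the right ESX host.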

Comment 8 Dave Allan 2011-03-24 17:18:12 UTC
As Dan noted in comment 2, we support VMware Virtual Center in the libvirt ESX driver, but data center wide management APIs are the realm of something like RHEV-M or DeltaCloud, and we don't intend to change that.  Closing as WONTFIX.  I've also changed the BZ summary slightly to reflect that subtlety.

