Bug 1195939 - virt-who connecting to RHEVM registers Gluster nodes with Satellite
Summary: virt-who connecting to RHEVM registers Gluster nodes with Satellite
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virt-who
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Radek Novacek
QA Contact: Li Bin Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-02-24 21:21 UTC by Paul Armstrong
Modified: 2016-12-01 00:33 UTC (History)
5 users

Fixed In Version: virt-who-0.14-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 11:56:07 UTC
Target Upstream Version:
Embargoed:


Attachments
added code to select only nodes from virt_service enabled clusters for RHEVM (3.98 KB, patch)
2015-02-24 22:31 UTC, Paul Armstrong
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2370 0 normal SHIPPED_LIVE virt-who bug fix and enhancement update 2015-11-19 10:39:27 UTC

Description Paul Armstrong 2015-02-24 21:21:45 UTC
Description of problem:
virt-who registers Gluster nodes to Satellite. This results in a double registration: once as a hypervisor and once as a managed Content Host.

Version-Release number of selected component (if applicable):
RHEL 7.0
RHEV 3.5
Satellite 6.0.8


How reproducible:
Always

Steps to Reproduce:
1. run virt-who in a RHEV environment with Gluster nodes

Actual results:
gluster nodes are registered in Satellite

Expected results:
gluster nodes should not be registered in Satellite as they are not currently allowed to run Guests and should not be treated as Hypervisors.

Additional info:
Issue is in /usr/share/virt-who/virt/rhevm/virt.py

        hosts_xml = ElementTree.parse(self.get(self.hosts_url))
        vms_xml = ElementTree.parse(self.get(self.vms_url))

        for host in hosts_xml.findall('host'):
            id = host.get('id')
            mapping[id] = []

The above section should filter hosts_xml by comparing each host's cluster id against the ids of clusters where <virt_service>true</virt_service>. By definition, this will be false today for clusters that contain Gluster nodes. If at some time in the future Gluster nodes can run VMs, virt_service will be true and they will be properly included.

Satellite or other products will at that time need to combine any information that they gather from virt-who much as they have to for libvirt hosts today.
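The filtering proposed above can be sketched as a small, self-contained Python snippet. The XML documents here are hypothetical stand-ins for the /api/clusters and /api/hosts responses, trimmed to the elements the logic needs:

```python
# Minimal sketch of the proposed filtering: keep only hosts whose cluster
# has virt_service enabled. Sample XML payloads are illustrative only.
import xml.etree.ElementTree as ElementTree

CLUSTERS_XML = """<clusters>
  <cluster id="c1"><virt_service>true</virt_service></cluster>
  <cluster id="c2"><virt_service>false</virt_service></cluster>
</clusters>"""

HOSTS_XML = """<hosts>
  <host id="h1"><cluster id="c1"/></host>
  <host id="h2"><cluster id="c2"/></host>
</hosts>"""

def virt_hosts(clusters_xml, hosts_xml):
    clusters_root = ElementTree.fromstring(clusters_xml)
    hosts_root = ElementTree.fromstring(hosts_xml)
    # Collect ids of clusters that are allowed to run VMs.
    virt_clusters = {
        c.get('id')
        for c in clusters_root.findall('cluster')
        if c.findtext('virt_service') == 'true'
    }
    # Keep only hosts whose cluster is virt_service-enabled.
    mapping = {}
    for host in hosts_root.findall('host'):
        cluster = host.find('cluster')
        if cluster is not None and cluster.get('id') in virt_clusters:
            mapping[host.get('id')] = []
    return mapping

print(virt_hosts(CLUSTERS_XML, HOSTS_XML))  # → {'h1': []}
```

With this in place, a host in a Gluster-only cluster (virt_service false) never enters the mapping, so it is never reported to Satellite as a hypervisor.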

Comment 1 Paul Armstrong 2015-02-24 22:31:01 UTC
Created attachment 994958 [details]
added code to select only nodes from virt_service enabled clusters for RHEVM

  self.username = self.config.username
  self.password = self.config.password

+ self.clusters_url = urlparse.urljoin(self.url, "/api/clusters")
  self.hosts_url = urlparse.urljoin(self.url, "/api/hosts")
  self.vms_url = urlparse.urljoin(self.url, "/api/vms")

...
...

  mapping = {}
+ clusters = {}

+  clusters_xml = ElementTree.parse(self.get(self.clusters_url))
   hosts_xml = ElementTree.parse(self.get(self.hosts_url))
   vms_xml = ElementTree.parse(self.get(self.vms_url))

+  for cluster in clusters_xml.findall('cluster'):
+     cluster_id = cluster.get('id')
+     virt_service = cluster.find('virt_service').text
+     if virt_service == 'true':
+       clusters[cluster_id] = []
					    
   for host in hosts_xml.findall('host'):
            id = host.get('id')
+           host_cluster = host.get('cluster')
+           host_cluster_id = host_cluster.get('id')
+           if host_cluster_id in clusters.keys():
            	mapping[id] = []

Comment 3 Paul Armstrong 2015-02-25 19:54:30 UTC
for host in hosts_xml.findall('host'):
  id = host.get('id')
- host_cluster = host.get('cluster')
- host_cluster_id = host_cluster.get('id')
+ host_cluster = host.findall('cluster')
+ host_cluster_id = host_cluster[0].get('id')
  if host_cluster_id in clusters.keys():
    mapping[id] = []

(There will be only one cluster object and that object must exist)
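The correction above reflects how ElementTree distinguishes XML attributes from child elements: .get() reads attributes, while the cluster id lives on a child <cluster> element, so .find()/.findall() is required. A tiny illustration, using a hypothetical host snippet:

```python
# .get() reads XML *attributes*; the cluster id is on a *child element*,
# so the original host.get('cluster') returned None.
import xml.etree.ElementTree as ElementTree

host = ElementTree.fromstring('<host id="h1"><cluster id="c1"/></host>')

print(host.get('cluster'))                   # None: no 'cluster' attribute
print(host.findall('cluster')[0].get('id'))  # 'c1': id of the child element
```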

Comment 4 Paul Armstrong 2015-02-25 21:24:11 UTC
Sorry, the correct path is /usr/share/virt-who/virt/rhevm/rhevm.py

Comment 5 Radek Novacek 2015-06-18 12:13:14 UTC
Thanks for the patch. I've applied it upstream and it will be part of the rebased version of virt-who.

https://github.com/virt-who/virt-who/commit/c579c6cffd355cba48d89b6083ca317d785d5892

Comment 6 Radek Novacek 2015-06-23 13:33:52 UTC
Fixed in virt-who-0.14-1.el7.

Comment 8 Liushihui 2015-08-05 09:44:23 UTC
Verified on virt-who-0.14-2.el7.noarch: virt-who no longer sends the Gluster nodes to Satellite. Moving to VERIFIED.

Verified versions:
virt-who-0.14-2.el7.noarch
subscription-manager-1.15.6-1.el7.x86_64
python-rhsm-1.15.3-1.el7.x86_64
SAM-1.4.1-RHEL-6-20141113.0

Preconditions:
1. Deploy Satellite 6.0.8.
2. Deploy RHEVM 3.5.4, create a cluster with the Gluster service enabled, and add two hosts to that cluster.
3. Deploy the Gluster server.

Verification steps:
1. Register the system to Satellite 6.0.8.
2. Configure virt-who to run in rhevm mode to monitor the RHEVM server:
# cat /etc/sysconfig/virt-who  | grep -v ^# | grep -v ^$
VIRTWHO_BACKGROUND=1
VIRTWHO_DEBUG=1
VIRTWHO_RHEVM=1
VIRTWHO_RHEVM_OWNER=ACME_Corporation
VIRTWHO_RHEVM_ENV=Library
VIRTWHO_RHEVM_SERVER=https://10.66.79.83:443
VIRTWHO_RHEVM_USERNAME=admin@internal
VIRTWHO_RHEVM_PASSWORD=redhat
3. Restart the virt-who service and check virt-who's log:
# systemctl restart virt-who
# tail -f /var/log/rhsm/rhsm.log
2015-08-05 17:36:25,255 [INFO]  @virtwho.py:655 - Using configuration "env/cmdline" ("rhevm" mode)
2015-08-05 17:36:25,268 [DEBUG]  @virtwho.py:203 - Starting infinite loop with 3600 seconds interval
2015-08-05 17:36:25,771 [DEBUG]  @rhevm.py:123 - Cluster of host a24571ac-a983-425c-b9a9-25d34fa8c49a is not virt_service, skipped
2015-08-05 17:36:25,772 [DEBUG]  @rhevm.py:123 - Cluster of host ff01f1a7-76ca-4d32-8201-d509057a3b05 is not virt_service, skipped
2015-08-05 17:36:25,782 [DEBUG]  @subscriptionmanager.py:112 - Authenticating with certificate: /etc/pki/consumer/cert.pem
2015-08-05 17:36:30,376 [DEBUG]  @subscriptionmanager.py:146 - Checking if server has capability 'hypervisor_async'
2015-08-05 17:36:34,953 [DEBUG]  @subscriptionmanager.py:158 - Server does not have 'hypervisors_async' capability
2015-08-05 17:36:34,953 [INFO]  @subscriptionmanager.py:165 - Sending update in hosts-to-guests mapping: {}

Result:
virt-who does not send the host/guest mapping for these two hosts to Satellite, since their cluster does not have virt_service enabled.

Comment 9 errata-xmlrpc 2015-11-19 11:56:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2370.html

