Bug 1314902 - 2 virt-who reporting on the same hypervisor using local libvirt and remote libvirt methods creates duplicate systems [NEEDINFO]
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: virt-who
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Chris Snyder
Keywords: Reopened, Triaged
Depends On:
Blocks: 1227986
Reported: 2016-03-04 14:50 EST by Barnaby Court
Modified: 2018-05-16 10:30 EDT (History)
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2017-12-06 05:54:44 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
toneata: needinfo? (jsefler)

Attachments: None
Description Barnaby Court 2016-03-04 14:50:19 EST
Description of problem:
Running two virt-who instances against the same host produces two different hostnames. When virt-who runs in local libvirt mode, it sends the local hostname to SAM. However, when another virt-who instance monitors the same host in remote libvirt mode, it sends the hypervisor UUID to SAM instead. As a result, the same host appears under two different names in the SAM/Satellite web UI.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. On host1, configure virt-who to run in libvirt mode, then restart the virt-who service:
[root@hp-z220-05 rhsm]# cat /etc/sysconfig/virt-who
[root@hp-z220-05 rhsm]# tail -f /var/log/rhsm/rhsm.log
[root@hp-z220-05 rhsm]# service virt-who restart
2. Check the host/guest association info in /var/log/rhsm/rhsm.log:
2015-06-04 10:10:41,478 [INFO]  @subscriptionmanager.py:116 - Sending domain info: [{'guestId': '137a41f8-a6e1-9ffe-a537-656d6dd1e8c5', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5},]
2015-06-04 10:10:42,141 [INFO]  @virtwho.py:129 - virt-who guest list update successful
3. Register the guest to SAM/Satellite.
4. Check the SAM/Satellite web UI; it shows the guest's host as "host1name".
5. On host2, configure virt-who to run in remote libvirt mode to monitor host1, then restart the virt-who service.
6. Check the host/guest association info in /var/log/rhsm/rhsm.log:
2015-06-04 10:20:26,379 [INFO]  @subscriptionmanager.py:123 - Sending update in hosts-to-guests mapping: {'003ac27c-ed28-e211-8973-10604b5b2b19': [{'guestId': '3394b8fa-a27b-43e0-fe39-82c587a80acd', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 1}, {'guestId': '69b59f9e-3e09-d8de-ac7e-c35c6e62f292', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '538cde5f-936e-9ca0-58ee-7528bbf86c2b', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '137a41f8-a6e1-9ffe-a537-656d6dd1e8c5', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '3c431b66-fbab-9cce-be3c-bc15ab2adf84', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}]}
7. In the SAM/Satellite web UI, check the guest's host.
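For reference, the two configurations from steps 1 and 5 might look like this in /etc/sysconfig/virt-who. This is a sketch only: the option names follow the sysconfig style of older virt-who releases and may differ by version, and the owner/env/server values are placeholders.

```ini
# Host1 (local libvirt mode): no options needed; virt-who defaults to
# querying the local libvirt daemon and reports under the local hostname.
#VIRTWHO_LIBVIRT=0

# Host2 (remote libvirt mode): point virt-who at host1's libvirt daemon.
# The owner/env/server values below are placeholders.
VIRTWHO_LIBVIRT=1
VIRTWHO_LIBVIRT_OWNER=ACME_Corporation
VIRTWHO_LIBVIRT_ENV=Library
VIRTWHO_LIBVIRT_SERVER=qemu+ssh://root@host1/system
VIRTWHO_LIBVIRT_USERNAME=root
```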

Actual results:
After step 6, a hypervisor UUID "003ac27c-ed28-e211-8973-10604b5b2b19" is reported.
After step 7, the SAM/Satellite web UI shows a new host "003ac27c-ed28-e211-8973-10604b5b2b19". Meanwhile, the guest's host has changed from "host1name" to "003ac27c-ed28-e211-8973-10604b5b2b19".

Expected results:
Since virt-who is monitoring the same machine in both cases, it should send the same hostname to SAM/Satellite regardless of the mode it runs in. It should not register a new hypervisor UUID when running in remote libvirt mode.

Additional info:
Comment 1 Chris Snyder 2016-03-17 14:56:47 EDT
The cause of this issue is that when virt-who reports on a local system (in libvirt mode), the information gathered about guests running on the system is reported to candlepin via a PUT to /consumers/<consumer_uuid> (the updateConsumer method in ConsumerResource.java and the updateConsumer method in python-rhsm/connection.py). No hypervisor id is reported by virt-who in this case (although the updateConsumer method in connection.py supports one).

When virt-who reports on a remote system (still operating in libvirt mode), the information it gathers is nearly identical, except that the remote system's UUID is gathered and reported as the hypervisor id. This information is passed to an entirely different endpoint in candlepin: POST /hypervisors/<owner>, which maps to HypervisorUpdate() (or HypervisorUpdateAsync(), depending on the Content-Type header).

The HypervisorUpdate* methods look for existing consumers for a given hypervisor by matching consumers' hypervisor ids against those in the submitted report. By default, a consumer is created if none is found. If guestIds are reported that were previously attached to a consumer that does not carry the given hypervisorId (in the report currently being processed), they are removed from that consumer.

In the case described above, candlepin does not have the information needed to link the locally reported consumer with the remotely reported one. Consequently, a new consumer is created (without the hostname, as that information was not used when creating the new hypervisor consumer), and the guest ids are removed from the first one.
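The reconciliation behaviour described above can be sketched as follows. This is a simplified model, not candlepin's actual code: consumer records are plain dicts and the function name is made up.

```python
def hypervisor_update(consumers, report):
    """Simplified model of candlepin's HypervisorUpdate reconciliation.

    consumers: list of dicts with 'name', 'hypervisor_id', 'guest_ids'.
    report: dict mapping hypervisor_id -> list of guest ids.
    """
    by_hyp_id = {c["hypervisor_id"]: c for c in consumers if c["hypervisor_id"]}
    for hyp_id, guest_ids in report.items():
        consumer = by_hyp_id.get(hyp_id)
        if consumer is None:
            # No consumer carries this hypervisor id: create a new one,
            # named after the id itself (no hostname is available here).
            consumer = {"name": hyp_id, "hypervisor_id": hyp_id, "guest_ids": []}
            consumers.append(consumer)
        # Guests previously attached to a different consumer migrate here.
        for other in consumers:
            if other is not consumer:
                other["guest_ids"] = [g for g in other["guest_ids"]
                                      if g not in guest_ids]
        consumer["guest_ids"] = list(guest_ids)
    return consumers

# The scenario from this bug: host1 registered itself (local mode, so no
# hypervisor id on record), then host2 reports host1 under its hypervisor UUID.
consumers = [{"name": "host1name", "hypervisor_id": None,
              "guest_ids": ["137a41f8-a6e1-9ffe-a537-656d6dd1e8c5"]}]
hypervisor_update(consumers, {
    "003ac27c-ed28-e211-8973-10604b5b2b19":
        ["137a41f8-a6e1-9ffe-a537-656d6dd1e8c5"],
})
# host1name loses its guest; a duplicate consumer named after the UUID appears.
```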

In my opinion, the solution to this bug is for virt-who to report the hypervisorId in both cases (local and remote libvirt modes).
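Concretely, the proposal amounts to the local-mode report carrying the same hypervisor id that remote mode already derives, so candlepin can match the two reports. A sketch with made-up function names; get_host_uuid() stands in for reading the host UUID locally (e.g. from libvirt's capabilities XML), and the UUID below is the one from this bug:

```python
def get_host_uuid():
    # Placeholder: a real implementation would query libvirt's
    # capabilities XML for the host UUID.
    return "003ac27c-ed28-e211-8973-10604b5b2b19"

def local_report(hostname, guests):
    """Sketch: a local-mode report that also carries the hypervisor id.

    Before the fix, local mode sent only the guest list (no hypervisor id);
    the idea is to include the same host UUID that remote mode reports.
    """
    return {
        "hypervisorId": get_host_uuid(),
        "name": hostname,
        "guestIds": guests,
    }

report = local_report("host1name", ["137a41f8-a6e1-9ffe-a537-656d6dd1e8c5"])
```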
Comment 4 Mike McCune 2016-03-28 18:26:18 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune@redhat.com with any questions.
Comment 5 Barnaby Court 2016-06-14 10:24:23 EDT
Fixed in tag virt-who-0.17-1
Comment 9 Jan Kurik 2017-12-06 05:54:44 EST
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:


This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

