Bug 1125823 - [VDSM mode]virt-who doesn't monitor guest changes in vdsm
Summary: [VDSM mode]virt-who doesn't monitor guest changes in vdsm
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: virt-who
Version: 6.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Radek Novacek
QA Contact: gaoshang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-01 07:17 UTC by Liushihui
Modified: 2016-12-01 00:34 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-29 13:05:00 UTC
Target Upstream Version:
Embargoed:



Description Liushihui 2014-08-01 07:17:29 UTC
Description of problem:
After migrating a guest from host1 to host2, the bonus pool subscription in the guest still exists; it should be removed.

Version-Release number of selected component (if applicable):
subscription-manager-1.12.10-1.el6.x86_64
python-rhsm-1.12.5-1.el6.x86_64
virt-who-0.10-4.el6.noarch
katello-headpin-1.4.3.26-1.el6sam_splice.noarch
candlepin-0.9.6.4-1.el6sam.noarch

How reproducible:
Always

Steps to Reproduce:
1. Register host 1 to SAM and subscribe to a SKU that can generate a bonus pool, e.g.:
[root@hp-z220-05 libvirt-test-API]# subscription-manager subscribe --pool=8ac2019c4756e686014756f573bc098b
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard
[root@hp-z220-05 libvirt-test-API]# subscription-manager list --consumed
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:          
SKU:               RH00002
Contract:          
Account:           
Serial:            6131261104803803007
Pool ID:           8ac2019c4756e686014756f573bc098b
Active:            True
Quantity Used:     1
Service Level:     Standard
Service Type:      L1-L3
Status Details:    
Subscription Type: Stackable
Starts:            12/31/2013
Ends:              12/31/2014
System Type:       Physical

2. Install a guest on host 1, and subscribe to the bonus pool
# subscription-manager list --consumed
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:          Oracle Java (for RHEL Server)
                   Red Hat Developer Toolset (for RHEL Server)
                   Red Hat Software Collections Beta (for RHEL Server)
                   Red Hat Enterprise Linux Server
                   Red Hat Beta
                   Red Hat Software Collections (for RHEL Server)
SKU:               RH00050
Contract:          None
Account:           None
Serial:            2593852921582532659
Pool ID:           8ac2019c4756e68601479061ffea3c33
Active:            True
Quantity Used:     1
Service Level:     Standard
Service Type:      L1-L3
Status Details:    
Starts:            12/31/2013
Ends:              12/31/2014
System Type:       Virtual

3. Migrate guest from host 1 to host 2
[root@hp-z220-05 libvirt-test-API]# virsh migrate --live 6.5_Server_x86_64 qemu+ssh://10.66.100.111/system --undefinesource

4. Check the bonus pool in guest
# subscription-manager refresh
    All local data refreshed
# subscription-manager list --consumed


Actual results:
The bonus pool still exists:
# subscription-manager list --consumed
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:          Oracle Java (for RHEL Server)
                   Red Hat Developer Toolset (for RHEL Server)
                   Red Hat Software Collections Beta (for RHEL Server)
                   Red Hat Enterprise Linux Server
                   Red Hat Beta
                   Red Hat Software Collections (for RHEL Server)
SKU:               RH00050
Contract:          None
Account:           None
Serial:            2593852921582532659
Pool ID:           8ac2019c4756e68601479061ffea3c33
Active:            True
Quantity Used:     1
Service Level:     Standard
Service Type:      L1-L3
Status Details:    
Starts:            12/31/2013
Ends:              12/31/2014
System Type:       Virtual


Expected results:
The bonus pool should be revoked

Additional info:
This problem does not exist on RHEL6.6-20140718.0-Server-x86_64 (KVM):
subscription-manager-1.12.4-1.el6.x86_64
python-rhsm-1.12.4-1.el6.x86_64
virt-who-0.10-3.el6.noarch

BTW, RHEL 5.11 does not have this problem either.

Comment 2 Radek Novacek 2014-08-01 08:32:43 UTC
I don't think this is a problem in virt-who; it looks like it's higher in the stack.

Can you please provide the content of /var/log/rhsm/rhsm.log from both builds of RHEL 6.6?

Comment 3 Liushihui 2014-08-06 09:30:33 UTC
RHEL6.6-20140718.0-Server-x86_64 (KVM) does not have this problem. After migrating the guest from host1 to host2:
On the destination host (host2), the virt-who log shows the following:
2014-08-06 17:12:44,493 [DEBUG]  @virtwho.py:170 - Starting infinite loop with 3600 seconds interval
2014-08-06 17:12:44,635 [DEBUG]  @libvirtd.py:80 - Starting libvirt monitoring event loop
2014-08-06 17:12:44,684 [WARNING]  @libvirtd.py:74 - Can't monitor libvirtd restarts due to bug in libvirt-python
2014-08-06 17:12:44,871 [INFO]  @subscriptionmanager.py:109 - Sending list of uuids: []
2014-08-06 17:14:26,368 [DEBUG]  @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:26,413 [INFO]  @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 3}]
2014-08-06 17:14:31,774 [DEBUG]  @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:31,807 [INFO]  @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 1}]
2014-08-06 17:14:32,829 [DEBUG]  @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:32,916 [INFO]  @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 1}]

On the original host (host1), the virt-who log shows the following:
2014-08-06 17:14:31,313 [DEBUG]  @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:31,316 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 6.4_Server_x86_64: b668f2c3-85e6-6a3a-385d-5a96806608af
2014-08-06 17:14:31,318 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 7.0_Server_x86_64: 63d00513-c2b6-81a6-a29d-1da130e67507
2014-08-06 17:14:31,320 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 6.5_Client_i386: 90a22163-7319-2de0-c49b-f6be6c5ddf9a
2014-08-06 17:14:31,322 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 5.10_Server_x86_64: 8f3dbcc1-02ce-4a73-5be6-84a4a12c3597
2014-08-06 17:14:31,377 [INFO]  @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '63d00513-c2b6-81a6-a29d-1da130e67507', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '8f3dbcc1-02ce-4a73-5be6-84a4a12c3597', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '90a22163-7319-2de0-c49b-f6be6c5ddf9a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': 'b668f2c3-85e6-6a3a-385d-5a96806608af', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 1}]
2014-08-06 17:14:31,502 [DEBUG]  @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:31,505 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 6.4_Server_x86_64: b668f2c3-85e6-6a3a-385d-5a96806608af
2014-08-06 17:14:31,506 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 7.0_Server_x86_64: 63d00513-c2b6-81a6-a29d-1da130e67507
2014-08-06 17:14:31,507 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 6.5_Client_i386: 90a22163-7319-2de0-c49b-f6be6c5ddf9a
2014-08-06 17:14:31,508 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 5.10_Server_x86_64: 8f3dbcc1-02ce-4a73-5be6-84a4a12c3597
2014-08-06 17:14:31,550 [INFO]  @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '63d00513-c2b6-81a6-a29d-1da130e67507', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '8f3dbcc1-02ce-4a73-5be6-84a4a12c3597', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '90a22163-7319-2de0-c49b-f6be6c5ddf9a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': 'b668f2c3-85e6-6a3a-385d-5a96806608af', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 3}]
2014-08-06 17:14:32,584 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 6.4_Server_x86_64: b668f2c3-85e6-6a3a-385d-5a96806608af
2014-08-06 17:14:32,587 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 7.0_Server_x86_64: 63d00513-c2b6-81a6-a29d-1da130e67507
2014-08-06 17:14:32,589 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 6.5_Client_i386: 90a22163-7319-2de0-c49b-f6be6c5ddf9a
2014-08-06 17:14:32,592 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:32,594 [DEBUG]  @libvirtd.py:137 - Virtual machine found: 5.10_Server_x86_64: 8f3dbcc1-02ce-4a73-5be6-84a4a12c3597
2014-08-06 17:14:33,295 [INFO]  @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '63d00513-c2b6-81a6-a29d-1da130e67507', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '8f3dbcc1-02ce-4a73-5be6-84a4a12c3597', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '90a22163-7319-2de0-c49b-f6be6c5ddf9a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': 'b668f2c3-85e6-6a3a-385d-5a96806608af', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}]
===========================================================================
However, RHEL6.6-20140730.0-Server-x86_64 (KVM) does have this problem. After migrating the guest from host1 to host2:
On the destination host (host2), there is no message at all in the virt-who log during the migration.
On the original host (host1), the virt-who log is the same as on RHEL6.6-20140718.0-Server-x86_64 (KVM).
============================================================================
 
I think the only difference between the two versions is on the destination host. There, virt-who automatically picks up the change during the migration on RHEL6.6-20140718.0-Server-x86_64, but it fails to do so on RHEL6.6-20140730.0-Server-x86_64.
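For illustration, here is a minimal sketch (my assumption, not virt-who's actual code) of how a libvirt client receives these lifecycle events; this is the mechanism behind the "Starting libvirt monitoring event loop" line in the log above:

import libvirt

def lifecycle_cb(conn, dom, event, detail, opaque):
    # Invoked by libvirt on guest lifecycle changes (started, stopped,
    # migrated, ...); virt-who would react by resending the guest list.
    print("domain %s: event %d, detail %d" % (dom.name(), event, detail))

libvirt.virEventRegisterDefaultImpl()   # use libvirt's built-in event loop
conn = libvirt.openReadOnly("qemu:///system")
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                            lifecycle_cb, None)
while True:
    libvirt.virEventRunDefaultImpl()    # dispatch pending events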

Comment 4 Liushihui 2014-08-08 08:50:54 UTC
This problem no longer exists on virt-who-0.10-5.el6.noarch.

Comment 5 Radek Novacek 2014-08-08 08:57:13 UTC
This bug has probably been fixed together with bug 1125810. Marking as duplicate.

*** This bug has been marked as a duplicate of bug 1125810 ***

Comment 6 Liushihui 2014-09-12 06:34:52 UTC
A similar problem still exists when virt-who is running in vdsm mode.

Description of problem:
When virt-who runs in vdsm mode and a guest is migrated from host1 to host2, the bonus pool subscription in the guest still exists; it should be revoked.

Version-Release number of selected component (if applicable):
subscription-manager-1.12.14-5.el6.x86_64
python-rhsm-1.12.5-2.el6.x86_64
virt-who-0.10-8.el6.noarch
katello-headpin-1.4.3.26-1.el6sam_splice.noarch
candlepin-0.9.6.5-1.el6sam.noarch

How reproducible:
Always

Steps to Reproduce:
Precondition:
Configure virt-who on both hosts to run in vdsm mode:
host1:                                  host2:
VIRTWHO_BACKGROUND=1                    VIRTWHO_BACKGROUND=1
VIRTWHO_DEBUG=1                         VIRTWHO_DEBUG=1
VIRTWHO_INTERVAL=5                      VIRTWHO_VDSM=1
VIRTWHO_VDSM=1

1. Register host 1 to SAM and subscribe to a SKU that can generate a bonus pool, e.g.:
[root@hp-z220-05 libvirt-test-API]# subscription-manager subscribe --pool=8ac2019c4756e686014756f573bc098b
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard
[root@hp-z220-05 libvirt-test-API]# subscription-manager list --consumed
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:          
SKU:               RH00002
Contract:          
Account:           
Serial:            6131261104803803007
Pool ID:           8ac2019c4756e686014756f573bc098b
Active:            True
Quantity Used:     1
Service Level:     Standard
Service Type:      L1-L3
Status Details:    
Subscription Type: Stackable
Starts:            12/31/2013
Ends:              12/31/2014
System Type:       Physical

2. Install a guest on host 1, and subscribe to the bonus pool
# subscription-manager list --consumed
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:          Oracle Java (for RHEL Server)
                   Red Hat Developer Toolset (for RHEL Server)
                   Red Hat Software Collections Beta (for RHEL Server)
                   Red Hat Enterprise Linux Server
                   Red Hat Beta
                   Red Hat Software Collections (for RHEL Server)
SKU:               RH00050
Contract:          None
Account:           None
Serial:            2593852921582532659
Pool ID:           8ac2019c4756e68601479061ffea3c33
Active:            True
Quantity Used:     1
Service Level:     Standard
Service Type:      L1-L3
Status Details:    
Starts:            12/31/2013
Ends:              12/31/2014
System Type:       Virtual

3. Register host 2 to the SAM server, then migrate the guest from host 1 to host 2
[root@hp-z220-05 libvirt-test-API]# virsh migrate --live 6.5_Server_x86_64 qemu+ssh://10.66.100.111/system --undefinesource

4. Check the bonus pool in guest
# subscription-manager refresh
    All local data refreshed
# subscription-manager list --consumed


Actual results:
The bonus pool still exists:
# subscription-manager list --consumed
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:          Oracle Java (for RHEL Server)
                   Red Hat Developer Toolset (for RHEL Server)
                   Red Hat Software Collections Beta (for RHEL Server)
                   Red Hat Enterprise Linux Server
                   Red Hat Beta
                   Red Hat Software Collections (for RHEL Server)
SKU:               RH00050
Contract:          None
Account:           None
Serial:            2593852921582532659
Pool ID:           8ac2019c4756e68601479061ffea3c33
Active:            True
Quantity Used:     1
Service Level:     Standard
Service Type:      L1-L3
Status Details:    
Starts:            12/31/2013
Ends:              12/31/2014
System Type:       Virtual


Expected results:
The bonus pool should be revoked

Additional info:
This problem does not occur when virt-who runs in libvirt mode.

Comment 7 Radek Novacek 2014-09-12 06:54:24 UTC
This issue is caused by the fact that virt-who doesn't monitor guest changes in vdsm mode. It only polls, and the default polling interval is one hour.

After this interval elapses (or after a restart of the virt-who service), the bonus pool should be fixed automatically.

The only real solution to this issue would be to listen to vdsm events somehow, but I'm not sure there is an API for that. This needs more investigation.
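As an immediate workaround, restarting the service triggers a fresh report right away:

# service virt-who restart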

Comment 9 Liushihui 2014-12-02 05:55:37 UTC
It also exists on rhev-hypervisor6-6.6-20141119.

Comment 10 Radek Novacek 2015-04-21 06:52:14 UTC
This needs more work than I originally assumed, moving to rhel-6.8.0.

Comment 12 Radek Novacek 2015-09-29 13:05:00 UTC
VDSM doesn't provide an API for events (or I failed to find it). This means that this bug can't be fixed; a possible workaround is to decrease the monitoring interval.
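For example, a hypothetical /etc/sysconfig/virt-who lowering the interval, in the same format as the configuration shown in comment 6 (the interval is in seconds; the default corresponds to the "3600 seconds interval" log line above):

VIRTWHO_BACKGROUND=1
VIRTWHO_DEBUG=1
VIRTWHO_VDSM=1
# Poll vdsm every 60 seconds instead of the default 3600.
VIRTWHO_INTERVAL=60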

