Bug 1125823
| Summary: | [VDSM mode] virt-who doesn't monitor guest changes in vdsm | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Liushihui <shihliu> |
| Component: | virt-who | Assignee: | Radek Novacek <rnovacek> |
| Status: | CLOSED CANTFIX | QA Contact: | gaoshang <sgao> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.6 | CC: | acathrow, liliu, ovasik |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-09-29 13:05:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
I don't think this is a problem in virt-who; it looks like it's higher in the stack. Can you please provide the content of /var/log/rhsm/rhsm.log on both builds of RHEL-6.6?

On RHEL6.6-20140718.0-Server-x86_64 (KVM), this problem does not occur. Migrate a guest from host1 to host2:
On the destination host (host2), the virt-who log shows the following:
2014-08-06 17:12:44,493 [DEBUG] @virtwho.py:170 - Starting infinite loop with 3600 seconds interval
2014-08-06 17:12:44,635 [DEBUG] @libvirtd.py:80 - Starting libvirt monitoring event loop
2014-08-06 17:12:44,684 [WARNING] @libvirtd.py:74 - Can't monitor libvirtd restarts due to bug in libvirt-python
2014-08-06 17:12:44,871 [INFO] @subscriptionmanager.py:109 - Sending list of uuids: []
2014-08-06 17:14:26,368 [DEBUG] @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:26,413 [INFO] @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 3}]
2014-08-06 17:14:31,774 [DEBUG] @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:31,807 [INFO] @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 1}]
2014-08-06 17:14:32,829 [DEBUG] @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:32,916 [INFO] @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 1}]
On the original host (host1), the virt-who log shows the following:
2014-08-06 17:14:31,313 [DEBUG] @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:31,316 [DEBUG] @libvirtd.py:137 - Virtual machine found: 6.4_Server_x86_64: b668f2c3-85e6-6a3a-385d-5a96806608af
2014-08-06 17:14:31,318 [DEBUG] @libvirtd.py:137 - Virtual machine found: 7.0_Server_x86_64: 63d00513-c2b6-81a6-a29d-1da130e67507
2014-08-06 17:14:31,320 [DEBUG] @libvirtd.py:137 - Virtual machine found: 6.5_Client_i386: 90a22163-7319-2de0-c49b-f6be6c5ddf9a
2014-08-06 17:14:31,322 [DEBUG] @libvirtd.py:137 - Virtual machine found: 5.10_Server_x86_64: 8f3dbcc1-02ce-4a73-5be6-84a4a12c3597
2014-08-06 17:14:31,377 [INFO] @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '63d00513-c2b6-81a6-a29d-1da130e67507', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '8f3dbcc1-02ce-4a73-5be6-84a4a12c3597', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '90a22163-7319-2de0-c49b-f6be6c5ddf9a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': 'b668f2c3-85e6-6a3a-385d-5a96806608af', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 1}]
2014-08-06 17:14:31,502 [DEBUG] @libvirtd.py:131 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:31,505 [DEBUG] @libvirtd.py:137 - Virtual machine found: 6.4_Server_x86_64: b668f2c3-85e6-6a3a-385d-5a96806608af
2014-08-06 17:14:31,506 [DEBUG] @libvirtd.py:137 - Virtual machine found: 7.0_Server_x86_64: 63d00513-c2b6-81a6-a29d-1da130e67507
2014-08-06 17:14:31,507 [DEBUG] @libvirtd.py:137 - Virtual machine found: 6.5_Client_i386: 90a22163-7319-2de0-c49b-f6be6c5ddf9a
2014-08-06 17:14:31,508 [DEBUG] @libvirtd.py:137 - Virtual machine found: 5.10_Server_x86_64: 8f3dbcc1-02ce-4a73-5be6-84a4a12c3597
2014-08-06 17:14:31,550 [INFO] @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '63d00513-c2b6-81a6-a29d-1da130e67507', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '8f3dbcc1-02ce-4a73-5be6-84a4a12c3597', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '90a22163-7319-2de0-c49b-f6be6c5ddf9a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': 'b668f2c3-85e6-6a3a-385d-5a96806608af', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 1, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 3}]
2014-08-06 17:14:32,584 [DEBUG] @libvirtd.py:137 - Virtual machine found: 6.4_Server_x86_64: b668f2c3-85e6-6a3a-385d-5a96806608af
2014-08-06 17:14:32,587 [DEBUG] @libvirtd.py:137 - Virtual machine found: 7.0_Server_x86_64: 63d00513-c2b6-81a6-a29d-1da130e67507
2014-08-06 17:14:32,589 [DEBUG] @libvirtd.py:137 - Virtual machine found: 6.5_Client_i386: 90a22163-7319-2de0-c49b-f6be6c5ddf9a
2014-08-06 17:14:32,592 [DEBUG] @libvirtd.py:137 - Virtual machine found: 6.5_Server_x86_64: 101b5fbb-6934-042c-02f8-3abf9f4b721a
2014-08-06 17:14:32,594 [DEBUG] @libvirtd.py:137 - Virtual machine found: 5.10_Server_x86_64: 8f3dbcc1-02ce-4a73-5be6-84a4a12c3597
2014-08-06 17:14:33,295 [INFO] @subscriptionmanager.py:109 - Sending list of uuids: [{'guestId': '101b5fbb-6934-042c-02f8-3abf9f4b721a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '63d00513-c2b6-81a6-a29d-1da130e67507', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '8f3dbcc1-02ce-4a73-5be6-84a4a12c3597', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': '90a22163-7319-2de0-c49b-f6be6c5ddf9a', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}, {'guestId': 'b668f2c3-85e6-6a3a-385d-5a96806608af', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}]
===========================================================================
However, on RHEL6.6-20140730.0-Server-x86_64 (KVM), the problem does occur. Migrate a guest from host1 to host2:
On the destination host (host2), there are no messages in the virt-who log during the migration.
On the original host (host1), the virt-who log is the same as on RHEL6.6-20140718.0-Server-x86_64 (KVM).
============================================================================
I think the only difference between the two versions is on the destination host. On that host, virt-who automatically monitors the changes during the migration on RHEL6.6-20140718.0-Server-x86_64, but it cannot do this on RHEL6.6-20140730.0-Server-x86_64.
The problem does not exist on virt-who-0.10-5.el6.noarch.

This bug has probably been fixed together with bug 1125810. Marking as duplicate.

*** This bug has been marked as a duplicate of bug 1125810 ***

A similar problem still exists when virt-who is running in vdsm mode.
Description of problem:
When virt-who runs in vdsm mode and a guest is migrated from host1 to host2, the bonus pool subscribed in the guest still exists; it should be revoked.
Version-Release number of selected component (if applicable):
subscription-manager-1.12.14-5.el6.x86_64
python-rhsm-1.12.5-2.el6.x86_64
virt-who-0.10-8.el6.noarch
katello-headpin-1.4.3.26-1.el6sam_splice.noarch
candlepin-0.9.6.5-1.el6sam.noarch
How reproducible:
Always
Steps to Reproduce:
Precondition:
Configure virt-who on both hosts to run in vdsm mode:

host1:
VIRTWHO_BACKGROUND=1
VIRTWHO_DEBUG=1
VIRTWHO_INTERVAL=5
VIRTWHO_VDSM=1

host2:
VIRTWHO_BACKGROUND=1
VIRTWHO_DEBUG=1
VIRTWHO_VDSM=1
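For reference, one minimal way to apply this precondition (a sketch, assuming the stock RHEL 6 layout where the virt-who init script reads /etc/sysconfig/virt-who and the SysV service name is virt-who):

```shell
# Sketch for host1; drop the VIRTWHO_INTERVAL line for host2.
# Note: this overwrites any existing sysconfig file; merge by hand
# if your install already has other settings there.
cat > /etc/sysconfig/virt-who <<'EOF'
VIRTWHO_BACKGROUND=1
VIRTWHO_DEBUG=1
VIRTWHO_INTERVAL=5
VIRTWHO_VDSM=1
EOF

# Restart the service so the new settings take effect.
service virt-who restart
```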
1. Register host 1 to SAM and subscribe a SKU that can generate a bonus pool, e.g.:
[root@hp-z220-05 libvirt-test-API]# subscription-manager subscribe --pool=8ac2019c4756e686014756f573bc098b
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard
[root@hp-z220-05 libvirt-test-API]# subscription-manager list --consumed
+-------------------------------------------+
Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:
SKU: RH00002
Contract:
Account:
Serial: 6131261104803803007
Pool ID: 8ac2019c4756e686014756f573bc098b
Active: True
Quantity Used: 1
Service Level: Standard
Service Type: L1-L3
Status Details:
Subscription Type: Stackable
Starts: 12/31/2013
Ends: 12/31/2014
System Type: Physical
2. Install a guest on host 1, and subscribe the bonus pool:
# subscription-manager list --consumed
+-------------------------------------------+
Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides: Oracle Java (for RHEL Server)
Red Hat Developer Toolset (for RHEL Server)
Red Hat Software Collections Beta (for RHEL Server)
Red Hat Enterprise Linux Server
Red Hat Beta
Red Hat Software Collections (for RHEL Server)
SKU: RH00050
Contract: None
Account: None
Serial: 2593852921582532659
Pool ID: 8ac2019c4756e68601479061ffea3c33
Active: True
Quantity Used: 1
Service Level: Standard
Service Type: L1-L3
Status Details:
Starts: 12/31/2013
Ends: 12/31/2014
System Type: Virtual
3. Register host 2 to the SAM server, then migrate the guest from host 1 to host 2:
[root@hp-z220-05 libvirt-test-API]# virsh migrate --live 6.5_Server_x86_64 qemu+ssh://10.66.100.111/system --undefinesource
4. Check the bonus pool in the guest:
# subscription-manager refresh
All local data refreshed
# subscription-manager list --consumed
Actual results:
The bonus pool still exists:
# subscription-manager list --consumed
Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides: Oracle Java (for RHEL Server)
Red Hat Developer Toolset (for RHEL Server)
Red Hat Software Collections Beta (for RHEL Server)
Red Hat Enterprise Linux Server
Red Hat Beta
Red Hat Software Collections (for RHEL Server)
SKU: RH00050
Contract: None
Account: None
Serial: 2593852921582532659
Pool ID: 8ac2019c4756e68601479061ffea3c33
Active: True
Quantity Used: 1
Service Level: Standard
Service Type: L1-L3
Status Details:
Starts: 12/31/2013
Ends: 12/31/2014
System Type: Virtual
Expected results:
The bonus pool should be revoked
Additional info:
This problem does not occur when virt-who runs in libvirt mode.
This issue is caused by the fact that virt-who doesn't monitor guest changes in vdsm. It only polls, and the default polling interval is one hour. After this interval (or a restart of the virt-who service), the bonus pool should be fixed automatically. The only solution to this issue would be to listen to vdsm events somehow, but I'm not sure there is an API for that. Needs more investigation.

It also exists on Rhev-hypervisor6-6.6-20141119.

This needs more work than I originally assumed; moving to rhel-6.8.0.

VDSM doesn't provide an API for events (or I failed to find it). That means this bug can't be fixed, and a possible workaround is to decrease the monitoring interval.
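The workaround mentioned above, decreasing the monitoring interval, amounts to a one-line sysconfig change (a sketch, assuming the RHEL 6 /etc/sysconfig/virt-who location; the interval value is in seconds):

```shell
# /etc/sysconfig/virt-who
# Poll every 60 seconds instead of the 1-hour default, so a migrated
# guest's host mapping (and its bonus pool) is corrected sooner.
VIRTWHO_VDSM=1
VIRTWHO_INTERVAL=60
```

Shorter intervals trade server load for faster detection; the value 60 here is only an illustration, not a recommended default.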
Description of problem:
After migrating a guest from host1 to host2, the bonus pool subscribed in the guest still exists; it should be removed.

Version-Release number of selected component (if applicable):
subscription-manager-1.12.10-1.el6.x86_64
python-rhsm-1.12.5-1.el6.x86_64
virt-who-0.10-4.el6.noarch
katello-headpin-1.4.3.26-1.el6sam_splice.noarch
candlepin-0.9.6.4-1.el6sam.noarch

How reproducible:
Always

Steps to Reproduce:
1. Register host 1 to SAM and subscribe a SKU that can generate a bonus pool, e.g.:
[root@hp-z220-05 libvirt-test-API]# subscription-manager subscribe --pool=8ac2019c4756e686014756f573bc098b
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard
[root@hp-z220-05 libvirt-test-API]# subscription-manager list --consumed
+-------------------------------------------+
Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:
SKU: RH00002
Contract:
Account:
Serial: 6131261104803803007
Pool ID: 8ac2019c4756e686014756f573bc098b
Active: True
Quantity Used: 1
Service Level: Standard
Service Type: L1-L3
Status Details:
Subscription Type: Stackable
Starts: 12/31/2013
Ends: 12/31/2014
System Type: Physical

2. Install a guest on host 1, and subscribe the bonus pool:
# subscription-manager list --consumed
+-------------------------------------------+
Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides: Oracle Java (for RHEL Server)
Red Hat Developer Toolset (for RHEL Server)
Red Hat Software Collections Beta (for RHEL Server)
Red Hat Enterprise Linux Server
Red Hat Beta
Red Hat Software Collections (for RHEL Server)
SKU: RH00050
Contract: None
Account: None
Serial: 2593852921582532659
Pool ID: 8ac2019c4756e68601479061ffea3c33
Active: True
Quantity Used: 1
Service Level: Standard
Service Type: L1-L3
Status Details:
Starts: 12/31/2013
Ends: 12/31/2014
System Type: Virtual

3. Migrate the guest from host 1 to host 2:
[root@hp-z220-05 libvirt-test-API]# virsh migrate --live 6.5_Server_x86_64 qemu+ssh://10.66.100.111/system --undefinesource

4. Check the bonus pool in the guest:
# subscription-manager refresh
All local data refreshed
# subscription-manager list --consumed

Actual results:
The bonus pool still exists:
# subscription-manager list --consumed
Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides: Oracle Java (for RHEL Server)
Red Hat Developer Toolset (for RHEL Server)
Red Hat Software Collections Beta (for RHEL Server)
Red Hat Enterprise Linux Server
Red Hat Beta
Red Hat Software Collections (for RHEL Server)
SKU: RH00050
Contract: None
Account: None
Serial: 2593852921582532659
Pool ID: 8ac2019c4756e68601479061ffea3c33
Active: True
Quantity Used: 1
Service Level: Standard
Service Type: L1-L3
Status Details:
Starts: 12/31/2013
Ends: 12/31/2014
System Type: Virtual

Expected results:
The bonus pool should be revoked.

Additional info:
This problem does not occur on RHEL6.6-20140718.0-Server-x86_64 (KVM) with:
subscription-manager-1.12.4-1.el6.x86_64
python-rhsm-1.12.4-1.el6.x86_64
virt-who-0.10-3.el6.noarch

BTW, RHEL5.11 also does not have this problem.