Bug 1641788 - virt-who configuration helper - wrong status
Summary: virt-who configuration helper - wrong status
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Virt-who Configure Plugin
Version: 6.3.4
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: Unspecified
Assignee: Marek Hulan
QA Contact: Kunxin Huang
Docs Contact: satellite-doc-list
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-22 18:49 UTC by Waldirio M Pinheiro
Modified: 2022-03-13 15:50 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-08 13:14:19 UTC
Target Upstream Version:
Embargoed:



Description Waldirio M Pinheiro 2018-10-22 18:49:46 UTC
Description of problem:
After configuring the virt-who entries, we can see all of them, along with their Interval and Status, on the page Infrastructure > Virt-who Configurations. The Status column is driven by *last_report_at*, which, IMHO, means the last time virt-who pushed information to the Satellite. This value stays fixed, but it should be updated every time virt-who completes a run.

Version-Release number of selected component (if applicable):
6.3.4

How reproducible:
100%

Steps to Reproduce:
1. Configure virt-who
2. Keep running
3. Check the Status column

Actual results:
The value remains the same as the *creation date*.

Expected results:
The value should be updated every time virt-who completes its run.

Additional info:

Comment 1 Marek Hulan 2018-10-23 18:55:50 UTC
You're right, this is the purpose of the field. But it's only updated if there was a change on a hypervisor, otherwise virt-who doesn't send any update. If you're sure some change happened, could you please upload virt-who logs and check there's no failed/stuck foreman task?
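For anyone doing the same check, below is a minimal, self-contained sketch of confirming from rhsm.log that virt-who actually sent a report. The sample log lines are copied from the output later in this bug; the `/tmp` path is only for the sketch (on a real system the log is `/var/log/rhsm/rhsm.log`):

```shell
# Hypothetical sketch: write a sample rhsm.log and check whether virt-who
# reported a Host-to-guest mapping, and when the last report was sent.
LOG=/tmp/rhsm.log.sample
cat > "$LOG" <<'EOF'
2018-10-23 17:24:15,742 [virtwho.destination_3885709102908748441 INFO] MainProcess(21197):Thread-4 @virt.py:_send_data:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
2018-10-23 17:30:25,435 [INFO] @virt.py:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
EOF

# Count the reports that were sent and pull the timestamp of the last one.
sent=$(grep -c 'Sending updated Host-to-guest mapping' "$LOG")
last=$(grep 'Sending updated Host-to-guest mapping' "$LOG" | tail -n 1 | awk '{print $1, $2}')
echo "reports sent: $sent, last at: $last"
```

If this shows a report was sent but *last_report_at* did not move, the next place to look is a failed or stuck foreman task, as suggested above.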

Comment 2 Waldirio M Pinheiro 2018-10-23 21:43:55 UTC
Hello Marek,

Thanks for your response.

I ran the check in my lab:


// Virt-who stopped
---
[root@wallsat63 ~]# systemctl status virt-who
● virt-who.service - Daemon for reporting virtual guest IDs to subscription-manager
   Loaded: loaded (/usr/lib/systemd/system/virt-who.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@wallsat63 ~]# 
---

// Actual report
---
[root@wallsat63 ~]# hammer virt-who-config list
---|-----------------------|---------------|--------|--------------------
ID | NAME                  | INTERVAL      | STATUS | LAST REPORT AT     
---|-----------------------|---------------|--------|--------------------
2  | vcenter02_from_helper | every 2 hours | OK     | 2018/10/06 08:16:17
1  | vcenter01_from_helper | every 2 hours | OK     | 2018/10/17 20:19:39
---|-----------------------|---------------|--------|--------------------
[root@wallsat63 ~]#
---

// Conf files (pointing to the same vCenter)
---
[root@wallsat63 virt-who.d]# ls -l *.conf
-rw-r--r--. 1 root root 1213 Apr 30 08:47 template.conf
-rw-r--r--. 1 root root  492 Aug 29 12:41 virt-who-config-1.conf
-rw-r--r--. 1 root root  492 Aug 29 12:43 virt-who-config-2.conf
[root@wallsat63 virt-who.d]# 
---

// One shot
---
# virt-who -o -d
---

// Logs
---
[root@wallsat63 rhsm]# grep Host-to rhsm.log
2018-10-23 17:24:15,742 [virtwho.destination_3885709102908748441 INFO] MainProcess(21197):Thread-4 @virt.py:_send_data:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
2018-10-23 17:24:15,787 [virtwho.destination_3798151044250057940 INFO] MainProcess(21197):Thread-5 @virt.py:_send_data:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
2018-10-23 17:24:16,057 [virtwho.destination_3885709102908748441 DEBUG] MainProcess(21197):Thread-4 @subscriptionmanager.py:hypervisorCheckIn:244 - Host-to-guest mapping being sent to 'ACME': {
2018-10-23 17:24:16,639 [virtwho.destination_3798151044250057940 DEBUG] MainProcess(21197):Thread-5 @subscriptionmanager.py:hypervisorCheckIn:244 - Host-to-guest mapping being sent to 'ACME': {
[root@wallsat63 rhsm]# 
---

// Checking the info (updated as expected, according to your explanation)
---
[root@wallsat63 rhsm]# hammer virt-who-config list
---|-----------------------|---------------|--------|--------------------
ID | NAME                  | INTERVAL      | STATUS | LAST REPORT AT     
---|-----------------------|---------------|--------|--------------------
2  | vcenter02_from_helper | every 2 hours | OK     | 2018/10/23 21:24:18
1  | vcenter01_from_helper | every 2 hours | OK     | 2018/10/23 21:24:18
---|-----------------------|---------------|--------|--------------------
[root@wallsat63 rhsm]# 
---

// One more shot
---
# virt-who -o -d
---

// One entry was updated and the other one failed (the failure is expected; we are working on it as a separate bug/issue). Either way, the info was updated.
---
[root@wallsat63 rhsm]# hammer virt-who-config list
---|-----------------------|---------------|--------|--------------------
ID | NAME                  | INTERVAL      | STATUS | LAST REPORT AT     
---|-----------------------|---------------|--------|--------------------
2  | vcenter02_from_helper | every 2 hours | OK     | 2018/10/23 21:24:18
1  | vcenter01_from_helper | every 2 hours | OK     | 2018/10/23 21:30:26
---|-----------------------|---------------|--------|--------------------
[root@wallsat63 rhsm]# 
---

// New entries on the log
---
[root@wallsat63 rhsm]# grep Host-to rhsm.log
2018-10-23 17:24:15,742 [virtwho.destination_3885709102908748441 INFO] MainProcess(21197):Thread-4 @virt.py:_send_data:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
2018-10-23 17:24:15,787 [virtwho.destination_3798151044250057940 INFO] MainProcess(21197):Thread-5 @virt.py:_send_data:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
2018-10-23 17:24:16,057 [virtwho.destination_3885709102908748441 DEBUG] MainProcess(21197):Thread-4 @subscriptionmanager.py:hypervisorCheckIn:244 - Host-to-guest mapping being sent to 'ACME': {
2018-10-23 17:24:16,639 [virtwho.destination_3798151044250057940 DEBUG] MainProcess(21197):Thread-5 @subscriptionmanager.py:hypervisorCheckIn:244 - Host-to-guest mapping being sent to 'ACME': {
2018-10-23 17:30:25,435 [INFO] @virt.py:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
2018-10-23 17:30:25,468 [INFO] @virt.py:620 - Sending updated Host-to-guest mapping to "ACME" including 8 hypervisors and 141 guests
[root@wallsat63 rhsm]# 
---


The main question here:
---
2018-10-23 17:30:25,468
	8 hypervisors and 141 guests

2018-10-23 17:30:25,435
	8 hypervisors and 141 guests
---

So we see the same number of hypervisors and guests (although maybe one guest moved from one hypervisor to another). Should that alone be enough to count as a change and update the report? Is that correct?
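To make the question concrete, here is a toy sketch (not virt-who's actual code, and the hypervisor/guest names are made up): two mappings with identical totals where one guest has moved between hypervisors. Comparing the full mapping content, rather than the aggregate counts, still detects a change:

```shell
# Two hypothetical mappings: same totals (2 hypervisors, 3 guests),
# but guest-2 has moved from hypervisor-a to hypervisor-b.
cat > /tmp/mapping_before.txt <<'EOF'
hypervisor-a: guest-1 guest-2
hypervisor-b: guest-3
EOF
cat > /tmp/mapping_after.txt <<'EOF'
hypervisor-a: guest-1
hypervisor-b: guest-2 guest-3
EOF

# Compare the full content, not the counts.
if cmp -s /tmp/mapping_before.txt /tmp/mapping_after.txt; then
  echo "mappings identical: no report needed"
else
  echo "mappings differ: report should be sent"
fi
```

If virt-who works this way, a guest moving between hypervisors should trigger an update even though the numbers in the log line look identical.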


Best Regards
-- 
Waldirio M Pinheiro | Senior Software Maintenance Engineer

Comment 6 Kenny Tordeurs 2019-03-26 10:42:56 UTC
Possible suggestion: Would it not be more intuitive for the "status" field to indicate when each virt-who configuration runs, regardless of any change detected in the mapping?

Maybe there should be a "status" field and a secondary field "Last run" that indicates when that particular config was last executed by virt-who?

Comment 7 Ahmed Eladawy 2019-04-04 11:42:05 UTC
As a workaround for bug https://bugzilla.redhat.com/show_bug.cgi?id=1603706, I used the same virt-who user for all virt-who configurations that report to the same organization.

After applying that, only the configuration with the original user has its status date updated.

The status of the other configurations does not change, even when there is an update to the mapping.

vcenter1	every 1 hours	April 04, 2019 09:35
libvirt1	every 1 hours	April 04, 2019 09:46 <-- the configuration with the main user

Comment 11 Bryan Kearney 2020-05-01 14:22:55 UTC
The Satellite Team is attempting to provide an accurate backlog of bugzilla requests which we feel will be resolved in the next few releases. We do not believe this bugzilla will meet that criteria, and have plans to close it out in 1 month. This is not a reflection on the validity of the request, but a reflection of the many priorities for the product. If you have any concerns about this, feel free to contact Red Hat Technical Support or your account team. If we do not hear from you, we will close this bug out. Thank you.

Comment 12 Bryan Kearney 2020-06-08 13:14:19 UTC
Thank you for your interest in Satellite 6. We have evaluated this request, and while we recognize that it is a valid request, we do not expect this to be implemented in the product in the foreseeable future. This is due to other priorities for the product, and not a reflection on the request itself. We are therefore closing this out as WONTFIX. If you have any concerns about this, please do not reopen. Instead, feel free to contact Red Hat Technical Support. Thank you.

