Bug 1506546 - Virt-who polls job status too quickly [rhel-7.3.z]
Summary: Virt-who polls job status too quickly [rhel-7.3.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virt-who
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: candlepin-bugs
QA Contact: Eko
URL:
Whiteboard:
Depends On: 1503700
Blocks:
 
Reported: 2017-10-26 10:00 UTC by Oneata Mircea Teodor
Modified: 2019-10-28 07:22 UTC
CC List: 6 users

Fixed In Version: virt-who-0.17-15.el7_3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1503700
Environment:
Last Closed: 2017-12-13 08:03:01 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System                  ID                 Private  Priority  Status        Summary                  Last Updated
Github                  virt-who/virt-who pull 103  No  None  None          None                     2017-10-26 10:00:51 UTC
Red Hat Product Errata  RHBA-2017:3447     No       normal    SHIPPED_LIVE  virt-who bug fix update  2017-12-12 21:02:48 UTC

Description Oneata Mircea Teodor 2017-10-26 10:00:47 UTC
This bug has been copied from bug #1503700 and has been proposed to be backported to 7.3 z-stream (EUS).

Comment 3 Eko 2017-10-30 08:19:37 UTC
Checked this issue with virt-who-0.17-14.el7_3: when a 429 response carries no Retry-After header, the default 30-second delay is not used and virt-who checks the job state immediately (see the sketch after scenario 4 below).


1) If no 429 code is returned, virt-who checks the job state after 15 seconds. [PASS]
2017-10-30 15:52:27,600 [virtwho.main DEBUG] MainProcess(6060):MainThread @executor.py:send_report:105 - Report for config "esx.conf" sent

==== waiting for 15s to check the job status ====

2017-10-30 15:52:42,621 [virtwho.main DEBUG] MainProcess(6060):MainThread @subscriptionmanager.py:_connect:121 - Authenticating with RHSM username admin
2017-10-30 15:52:42,952 [virtwho.main DEBUG] MainProcess(6060):MainThread @subscriptionmanager.py:check_report_state:227 - Checking status of job hypervisor_update_4ce60f8a-4040-42d0-899f-6b3dcab33151

2) If a 429 code is returned but no Retry-After header is set, virt-who should wait the default 30 seconds before checking the job state; in fact there is no wait at all, and the 30-second default is not used. [FAILED]

2017-10-30 16:07:26,965 [virtwho.main DEBUG] MainProcess(6267):MainThread @executor.py:run:287 - HTTP 429 received during job polling
2017-10-30 16:07:26,969 [virtwho.main DEBUG] MainProcess(6267):MainThread @subscriptionmanager.py:_connect:121 - Authenticating with RHSM username admin

==== no waiting, not 30 seconds to retry by default ====

2017-10-30 16:07:27,329 [virtwho.main DEBUG] MainProcess(6267):MainThread @subscriptionmanager.py:check_report_state:227 - Checking status of job hypervisor_update_200b2461-117e-4123-b2a0-9e52c544697e


3) If a 429 code is returned and Retry-After is set to 10, virt-who waits 10 seconds before checking the job state. [PASS]
2017-10-30 16:11:21,783 [virtwho.main DEBUG] MainProcess(6267):MainThread @executor.py:run:287 - HTTP 429 received during job polling

==== waiting for 10 seconds as Retry-After setting ====

2017-10-30 16:11:31,798 [virtwho.main DEBUG] MainProcess(6267):MainThread @subscriptionmanager.py:_connect:121 - Authenticating with RHSM username admin
2017-10-30 16:11:32,144 [virtwho.main DEBUG] MainProcess(6267):MainThread @subscriptionmanager.py:check_report_state:227 - Checking status of job hypervisor_update_200b2461-117e-4123-b2a0-9e52c544697e


4) If a 500 or 404 error code is returned, virt-who shows the error message and exits.
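
To make the expected behavior concrete, here is a minimal sketch of the polling-delay rules above, assuming a hypothetical helper delay_for_response and the constants shown; this illustrates the intended behavior only and is not virt-who's actual code from pull 103:

    import time

    DEFAULT_RETRY_AFTER = 30   # default wait on 429 with no Retry-After header
    INITIAL_POLL_DELAY = 15    # wait before the first job-state check

    def delay_for_response(status_code, headers):
        """Return the number of seconds to sleep before the next job-state check."""
        if status_code == 429:
            try:
                # Honor the server's Retry-After header when it is present.
                return int(headers["Retry-After"])
            except (KeyError, ValueError):
                # No (or unparsable) Retry-After: fall back to the 30s default.
                # The bug in 0.17-14.el7_3: this fallback was skipped, so
                # polling resumed immediately (scenario 2 above).
                return DEFAULT_RETRY_AFTER
        if status_code in (404, 500):
            # Scenario 4: report the error and stop polling.
            raise RuntimeError("job polling failed with HTTP %d" % status_code)
        return INITIAL_POLL_DELAY

    # Example: time.sleep(delay_for_response(429, {})) sleeps 30s once fixed.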

Comment 4 Eko 2017-11-01 03:48:55 UTC
Checked this with Chris; a new patch will fix this issue. Marking it verified; when the new build is available, we will test it again.

Comment 6 Eko 2017-11-03 03:34:48 UTC
Verified with virt-who-0.17-16.el7_3:
1. On the first start of the virt-who service, it waits 15 seconds before checking the job state. [PASS]
2. If a 429 code is received but no Retry-After header is set, it waits the default 30 seconds before checking the job state. [PASS]
3. If a 429 code is received and Retry-After is set to 10, it waits 10 seconds before checking the job state. [PASS]
4. If a 404/500 code is received, virt-who terminates. [PASS]
5. If two or more config files exist in /etc/virt-who.d, all reports are sent together, and the job states are then checked one by one, every 15 seconds (sketched below). [PASS]
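
A minimal sketch of the step-5 behavior, assuming a hypothetical check_report_state callable that returns True once a job has finished; this is an illustration of the verified behavior, not virt-who's actual implementation:

    import time

    POLL_INTERVAL = 15  # seconds between consecutive job-state checks

    def poll_jobs(job_ids, check_report_state):
        # All reports have already been sent; now poll the resulting jobs
        # one by one, waiting 15 seconds before each check.
        pending = list(job_ids)
        while pending:
            time.sleep(POLL_INTERVAL)
            job = pending.pop(0)
            if not check_report_state(job):
                pending.append(job)  # still running: check it again later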

Comment 8 errata-xmlrpc 2017-12-13 08:03:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3447

