Bug 1872477
| Summary: | virt-who fails to parse output from hypervisor. [rhel-7.9.z] | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Rudnei Bertol Jr. <rbertolj> |
| Component: | virt-who | Assignee: | candlepin-bugs |
| Status: | CLOSED ERRATA | QA Contact: | Eko <hsun> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.8 | CC: | csnyder, hsun, jreznik, kuhuang, phess, redakkan, wpoteat |
| Target Milestone: | rc | Keywords: | Reopened, Triaged, ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1876927 (view as bug list) | Environment: | |
| Last Closed: | 2020-12-15 11:19:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1876927 | | |
The attached JSON does not contain characters that cause the error on my machine. Can you check the file and confirm? Thanks.

I edited the file to match the error described above. I do not need a new file. Any idea where the slash '/' is getting converted to URL escaping?

Where is this conversion of '/' to '%2f' happening? Is it given to us from vCenter as '/' or as '%2f'? We try not to use virt-who to translate data if at all possible.

Hey William,
I am not able to see the raw file collected by virt-who; however, I used our internal VMware environment, created a cluster called 'Test_1/2', and used the 'vmware_cluster_facts' module to collect the clusters from the vCenter. We can see that the encoding comes from the VMware API:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PLAY [localhost] ************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [localhost]
TASK [Gather cluster info from given datacenter] ****************************************************************************************************************************
ok: [localhost]
TASK [debug] ****************************************************************************************************************************************************************
ok: [localhost] => {
"cluster_info": {
"changed": false,
"clusters": {
"Test_1%2f2": { <=================== Cluster named as 'Test_1/2', but it is being collected as 'Test_1%2f2'
"drs_default_vm_behavior": "fullyAutomated",
"drs_enable_vm_behavior_overrides": true,
"drs_vmotion_rate": 3,
"enable_ha": false,
"enabled_drs": false,
"enabled_vsan": false,
"ha_admission_control_enabled": true,
"ha_failover_level": 1,
"ha_host_monitoring": "enabled",
"ha_restart_priority": [
"medium"
],
"ha_vm_failure_interval": [
30
],
"ha_vm_max_failure_window": [
-1
],
"ha_vm_max_failures": [
3
],
"ha_vm_min_up_time": [
120
],
"ha_vm_monitoring": "vmMonitoringDisabled",
"ha_vm_tools_monitoring": [
"vmMonitoringDisabled"
],
"vsan_auto_claim_storage": false
},
"vMotion-Cluster": {
"drs_default_vm_behavior": "fullyAutomated",
"drs_enable_vm_behavior_overrides": true,
"drs_vmotion_rate": 3,
"enable_ha": false,
"enabled_drs": true,
"enabled_vsan": false,
"ha_admission_control_enabled": true,
"ha_failover_level": 1,
"ha_host_monitoring": "enabled",
"ha_restart_priority": [
"medium"
],
"ha_vm_failure_interval": [
30
],
"ha_vm_max_failure_window": [
-1
],
"ha_vm_max_failures": [
3
],
"ha_vm_min_up_time": [
120
],
"ha_vm_monitoring": "vmMonitoringDisabled",
"ha_vm_tools_monitoring": [
"vmMonitoringDisabled"
],
"vsan_auto_claim_storage": false
}
},
"failed": false
}
}
PLAY RECAP ******************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
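For reference, the escaping seen above is plain URL percent-encoding, which Python's standard library can both reproduce and reverse. A minimal standalone sketch (the cluster name is taken from this reproducer; note that Python's quote() emits uppercase '%2F' while vCenter returned lowercase '%2f', and unquote() accepts either case):
~~~
# Reproduce and reverse the percent-encoding seen in the output above.
# Python 2/3 compatible import; 'except ImportError' is the idiomatic
# form of the fallback used in the patch below.
try:
    from urllib import quote, unquote          # Python 2
except ImportError:
    from urllib.parse import quote, unquote    # Python 3

name = "Test_1/2"
print(quote(name, safe=""))    # -> 'Test_1%2F2' (safe="" so '/' is escaped too)
print(unquote("Test_1%2f2"))   # -> 'Test_1/2'
~~~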
I applied the same idea that was applied to 'log.py' to fix the output:
~~~
]# diff /usr/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_cluster_facts.py /root/vmware_cluster_facts.py
99a100,103
> try:
> from urllib import unquote as urldecode
> except:
> from urllib.parse import unquote as urldecode
184c188
< results['clusters'][cluster.name] = dict(
---
> results['clusters'][urldecode(cluster.name)] = dict(
~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
]# ansible-playbook playbook.yml -e vcenter_password=$SENHA
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *****************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [localhost]
TASK [Gather cluster info from given datacenter] *********************************************************************************************************************************************
ok: [localhost]
TASK [debug] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"cluster_info": { <=================== Cluster named as expected.
"changed": false,
"clusters": {
"Test_1/2": {
"drs_default_vm_behavior": "fullyAutomated",
"drs_enable_vm_behavior_overrides": true,
"drs_vmotion_rate": 3,
"enable_ha": false,
"enabled_drs": false,
"enabled_vsan": false,
"ha_admission_control_enabled": true,
"ha_failover_level": 1,
"ha_host_monitoring": "enabled",
"ha_restart_priority": [
"medium"
],
"ha_vm_failure_interval": [
30
],
"ha_vm_max_failure_window": [
-1
],
"ha_vm_max_failures": [
3
],
"ha_vm_min_up_time": [
120
],
"ha_vm_monitoring": "vmMonitoringDisabled",
"ha_vm_tools_monitoring": [
"vmMonitoringDisabled"
],
"vsan_auto_claim_storage": false
},
"vMotion-Cluster": {
"drs_default_vm_behavior": "fullyAutomated",
"drs_enable_vm_behavior_overrides": true,
"drs_vmotion_rate": 3,
"enable_ha": false,
"enabled_drs": true,
"enabled_vsan": false,
"ha_admission_control_enabled": true,
"ha_failover_level": 1,
"ha_host_monitoring": "enabled",
"ha_restart_priority": [
"medium"
],
"ha_vm_failure_interval": [
30
],
"ha_vm_max_failure_window": [
-1
],
"ha_vm_max_failures": [
3
],
"ha_vm_min_up_time": [
120
],
"ha_vm_monitoring": "vmMonitoringDisabled",
"ha_vm_tools_monitoring": [
"vmMonitoringDisabled"
],
"vsan_auto_claim_storage": false
}
},
"failed": false
}
}
PLAY RECAP ***********************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
regards
rbertol
In addition, after some testing, I realized that the patch fixes only the debug output; the facts are still being created on the Satellite as 'Test_1%2f2' instead of 'Test_1/2'. Looking into it, the root issue appears to be in the file 'virt/esx/esx.py', which collects the information from VMware (see the illustrative sketch after this comment stream).

regards
rbertol

Hey guys,
Just to let you know, I raised this issue as https://github.com/ansible-collections/vmware/issues/365 against the Ansible VMware plugin.

regards
rbertol

Red Hat Enterprise Linux 7 shipped its final minor release on September 29th, 2020. 7.9 was the last minor release scheduled for RHEL 7. From initial triage it does not appear that the remaining Bugzillas meet the inclusion criteria for Maintenance Phase 2, and they will now be closed. From the RHEL life cycle page: https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_2_Phase

"During Maintenance Support 2 Phase for Red Hat Enterprise Linux version 7, Red Hat defined Critical and Important impact Security Advisories (RHSAs) and selected (at Red Hat discretion) Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available."

If this BZ was closed in error and meets the above criteria, please re-open it, flag it for 7.9.z, provide suitable business and technical justifications, and follow the process for Accelerated Fixes: https://source.redhat.com/groups/public/pnt-cxno/pnt_customer_experience_and_operations_wiki/support_delivery_accelerated_fix_release_handbook

Feature requests can be re-opened and moved to RHEL 8 if the desired functionality is not already present in the product. Please reach out to the applicable Product Experience Engineer[0] if you have any questions or concerns.

[0] https://bugzilla.redhat.com/page.cgi?id=agile_component_mapping.html&product=Red+Hat+Enterprise+Linux+7

Apologies for the inadvertent closure.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt-who bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5444
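virt-who's actual 'virt/esx/esx.py' code is not shown in this report, so the following is only an illustrative sketch of the idea suggested above: decode percent-escapes once, at the point where names are collected from the vSphere API, before they are logged or reported. The function name and the call site are hypothetical, not virt-who's real code.
~~~
# Illustrative sketch only; not virt-who's actual code.
# Assumes names arrive URL-escaped from the vSphere API (e.g. 'Test_1%2f2').
try:
    from urllib import unquote as urldecode        # Python 2
except ImportError:
    from urllib.parse import unquote as urldecode  # Python 3

def normalize_name(raw_name):
    """Decode percent-escapes such as '%2f' back to '/'."""
    return urldecode(raw_name)

# Hypothetical call site when reading a cluster object:
# cluster_name = normalize_name(cluster_obj.name)
~~~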
Description of problem:
virt-who fails to parse the output from the hypervisor when the hypervisor's API percent-encodes (URL-escapes) special characters, e.g. '/' becomes '%2f'.

Version-Release number of selected component (if applicable):
]# rpm -qa | grep virt-who
virt-who-0.26.5-1.el7.noarch

How reproducible:
A complete reproducer will be provided in the next update.

Steps to Reproduce:
1.
2.
3.

Actual results:
The virt-who debug command fails to parse the JSON.
~~~
2020-08-25 17:00:50,699 [virtwho.destination_8596163159926476453 DEBUG] MainProcess(16987):Thread-3 @subscriptionmanager.py:_is_rhsm_server_async:290 - Server has capability 'hypervisors_async'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/virtwho/log.py", line 95, in emit
    self._queue.put_nowait(self.prepare(record))
  File "/usr/lib/python2.7/site-packages/virtwho/log.py", line 87, in prepare
    record.msg = record.msg % record.args
TypeError: not enough arguments for format string
Logged from file subscriptionmanager.py, line 207
~~~

Expected results:
The virt-who debug command should show the debug output.

Additional info:
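The traceback above is consistent with the escaped name reaching Python's %-style string formatting: in 'Test_1%2f2', the substring '%2f' parses as a float conversion specifier with field width 2, so the format operation expects an argument that was never supplied. A minimal standalone reproduction of that failure mode (the message text here is made up for illustration):
~~~
# '%2f' inside a log message is parsed as a %-format float specifier,
# so formatting with no arguments raises the TypeError from the traceback.
msg = "host-to-guest mapping for cluster Test_1%2f2"  # hypothetical message
try:
    msg % ()  # analogous to 'record.msg % record.args' in log.py
except TypeError as exc:
    print(exc)  # -> not enough arguments for format string
~~~
A common defensive measure on the logging side, independent of decoding the name itself, is to avoid %-interpolating data-bearing strings into the format string, e.g. logger.debug('%s', name) instead of logger.debug('...' + name).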