Bug 1314522 - hosted-engine --vm-status results in a Python exception
Summary: hosted-engine --vm-status results in a Python exception
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-hosted-engine-ha
Classification: oVirt
Component: General
Version: 1.3.4.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Martin Sivák
QA Contact: Ilanit Stein
URL:
Whiteboard:
Duplicates: 1314523
Depends On:
Blocks:
 
Reported: 2016-03-03 20:12 UTC by olovopb
Modified: 2016-03-27 08:00 UTC
CC: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-03-27 08:00:30 UTC
oVirt Team: SLA
Embargoed:
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?



Description olovopb 2016-03-03 20:12:21 UTC
Description of problem:
After a fresh install and hosted-engine deployment of oVirt 3.6.3, the command hosted-engine --vm-status shows an error. An iSCSI LUN is used to store the hosted engine.
This is similar to bug 1238823.

Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 117, in <module>
    if not status_checker.print_status():
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 60, in print_status
    all_host_stats = ha_cli.get_all_host_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 160, in get_all_host_stats
    return self.get_all_stats(self.StatModes.HOST)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 107, in get_all_stats
    stats = self._parse_stats(stats, mode)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 146, in _parse_stats
    md = metadata.parse_metadata_to_dict(host_id, data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/metadata.py", line 156, in parse_metadata_to_dict
    constants.METADATA_FEATURE_VERSION))
ovirt_hosted_engine_ha.lib.exceptions.FatalMetadataError: Metadata version 8 from host 10 too new for this agent (highest compatible version: 1)

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-ha-1.3.4.3-1.el7.centos.noarch


How reproducible:


Steps to Reproduce:
1. Deploy the hosted engine to an iSCSI LUN.
2. Run hosted-engine --vm-status.

Actual results:

The same traceback as shown in the description above.


Expected results:
hosted-engine --vm-status prints the status of all hosts without raising an exception.

Additional info:

Comment 1 Yaniv Kaul 2016-03-07 06:53:21 UTC
*** Bug 1314523 has been marked as a duplicate of this bug. ***

Comment 2 Doron Fediuck 2016-03-20 09:42:06 UTC
Hi,
according to the message:

"
Metadata version 8 from host 10 too new for this agent (highest compatible version: 1)
"

The HE agent on one or more of your hosts is not up to date.
Did you make sure to update the hosted-engine RPMs on all relevant hypervisors? A quick way to verify is sketched below.
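
A minimal check, assuming standard RPM tooling and the package name shown in this report (ovirt-hosted-engine-ha); run it on every hypervisor and compare the output:

    # Query the installed hosted-engine package versions on this host;
    # all hosts sharing the hosted-engine storage should report the same versions.
    rpm -q ovirt-hosted-engine-ha ovirt-hosted-engine-setup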

Comment 3 olovopb 2016-03-23 13:49:10 UTC
Hello,
it was a clean, fresh install of the OS and a fresh install of oVirt, so I assume I had the newest versions of the packages installed.

Comment 4 Martin Sivák 2016-03-23 14:17:22 UTC
This seems to be an issue with the metadata file, which was not properly cleaned up during setup. That usually happens when you use a physical disk that had some data on it before.

It is exactly the same problem as in the bug you referenced.

You should be able to fix this using the following procedure:

1) Stop all HA agents (they are probably down anyway)
2) Write zeros over the whole iSCSI LUN with dd if=/dev/zero of=/dev/the_proper_device bs=1M (see the sketch below)
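
A sketch of the full procedure, assuming the HA services run under systemd as ovirt-ha-agent and ovirt-ha-broker (the usual names on EL7, an assumption here) and with /dev/the_proper_device standing in for the actual LUN device:

    # 1) Stop the HA services on every host in the cluster
    systemctl stop ovirt-ha-agent ovirt-ha-broker

    # 2) Overwrite the LUN with zeros; this destroys all data on the device,
    #    so double-check the device path before running it
    dd if=/dev/zero of=/dev/the_proper_device bs=1M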

If you can attach the ovirt-hosted-engine-setup deploy log, we will take a look at why setup did not clean the file during the install (it usually does).

Comment 5 olovopb 2016-03-23 14:22:24 UTC
Hello,
well, I used NFS instead of iSCSI, so I do not have the logs anymore.
As I remember, I used a new iSCSI LUN from a NAS and tried several times to reinstall the hosted engine. I also cleaned the iSCSI disk between reinstalls.
I also used the --clean-metadata option of the hosted-engine command. Nothing worked, so I decided to use NFS to store the hosted-engine VM.
I cannot reconfigure it to iSCSI now, because it is in production.

Comment 6 Martin Sivák 2016-03-23 14:46:27 UTC
--clean-metadata would work if you called it in a loop over a high enough range of host IDs (see the sketch below); the dd approach makes it a single-step operation.
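
A minimal sketch of such a loop, assuming the --host-id and --force-clean options of hosted-engine --clean-metadata and an upper bound of 250 host IDs (both are assumptions, not taken from this report):

    # Clear the metadata slot for each host ID in the range; adjust the bound as needed
    for id in $(seq 1 250); do
        hosted-engine --clean-metadata --host-id=$id --force-clean
    done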

Unfortunately, a new iSCSI LUN does not mean it contains all zeros.

Glad to hear NFS worked for you; too bad we can't check what went wrong, since setup is supposed to clear the volume.

Comment 7 Doron Fediuck 2016-03-27 08:00:30 UTC
Since we cannot reproduce it, we are closing this issue.
If you are able to reproduce it and provide logs, please reopen with all the relevant information.

