Bug 1238823
| Summary: | hosted-engine --vm-status results in a Python exception | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-hosted-engine-ha | Reporter: | Matteo Brancaleoni <mbrancaleoni> |
| Component: | General | Assignee: | Martin Sivák <msivak> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Elad <ebenahar> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | --- | CC: | acanan, amureini, bugs, dfediuck, didi, ebenahar, ecohen, gklein, istein, lsurette, mbrancaleoni, msivak, mwest, rbalakri, rgolan, sbonazzo, yeylon, ylavi |
| Target Milestone: | ovirt-3.5.5 | Keywords: | Reopened, ZStream |
| Target Release: | 1.2.7.2 | Flags: | ylavi: ovirt-3.5.z?, ylavi: planning_ack+, rule-engine: devel_ack+, rule-engine: testing_ack? |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | sla | | |
| Fixed In Version: | ovirt-hosted-engine-ha-1.2.7.2-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-10-26 13:43:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | SLA | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1263111 | | |
Description
Matteo Brancaleoni, 2015-07-02 17:57:47 UTC
CC from the ovirt-users mailing list: I blindly tried the following:

* checked the name of the block device used for metadata
* shut down the engine VM
* stopped the agent and the broker on the first host
* finally zeroed the block device with `dd if=/dev/zero of=/dev/dm-12` (dm-12 is the block device pointed to by the metadata file)

After starting the broker and the agent again, the engine was started by HA after a while and the metadata was readable. `hosted-engine --vm-status` now works and I was able to add a second node to the cluster. The web GUI also reports Hosted Engine HA as Active. Maybe the metadata block device needs to be cleared when doing an iSCSI setup? I don't know whether this is correct, but it seems to work now. (These steps are sketched as a script at the end of this report.)

In 3.6.0 there will be an option to fix the metadata in case of such issues. The root cause here was a mixture of new metadata with an older agent, which should not happen. Actually, the iSCSI volume needs to be wiped before we start using it, since VDSM does not do that automatically.

Patch has been merged, please move to MODIFIED if no other change is required.

Can you please add steps to reproduce? Is this iSCSI storage specific? CentOS specific?

iSCSI specific. The reproducer is quite simple: a standard hosted-engine installation is needed and a non-clean iSCSI disk has to be used for the storage. Alternatively, take an existing install and fill the ha_agent.metadata with random data.

Bug tickets that are moved to testing must have a target release set to make sure the tester knows what to test. Please set the correct target release before moving to ON_QA.

Deployment over a non-clean iSCSI LUN finished successfully with vt17.3. Used the following:
ovirt-hosted-engine-ha-1.2.7.2-1.el7ev.noarch
ovirt-hosted-engine-setup-1.2.6.1-1.el7ev.noarch

oVirt 3.5.5 has been released, including fixes for this issue.

(In reply to Elad from comment #8)
> Deployment over a non-clean iSCSI LUN finished successfully with vt17.3. Used the following:
> ovirt-hosted-engine-ha-1.2.7.2-1.el7ev.noarch
> ovirt-hosted-engine-setup-1.2.6.1-1.el7ev.noarch

Any chance to find out how this bug was verified? I am pretty certain that the fix is wrong for iSCSI at least and wonder whether it was really verified. See also bug 1346341 and likely also bug 1314522 and other similar reports.

The bug was verified according to the steps in comment 6.

(In reply to Elad from comment #11)
> The bug was verified according to the steps in comment 6

How did you force a non-clean disk?

(In reply to Yedidyah Bar David from comment #12)
> (In reply to Elad from comment #11)
> > The bug was verified according to the steps in comment 6
>
> How did you force a non-clean disk?

Deployed HE and re-deployed over the same LUN.

OK. For bug 1346341 we'll provide more detailed instructions. Thanks!
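For reference, here is a minimal sketch of the manual workaround described at the top of this report. It is not an official procedure: the systemd service names (ovirt-ha-agent, ovirt-ha-broker), the dd block size, and the example device path are assumptions based on a typical el7 hosted-engine host, and /dev/dm-12 is only the device from the original report. Check which block device the metadata file points to on your own host before running anything destructive.

```bash
#!/bin/bash
# Sketch of the workaround from the report: wipe the shared HA metadata
# block device while the HA daemons are stopped, then restart them.
# ASSUMPTIONS: the service names and the device path below are illustrative,
# not taken verbatim from the bug report; adjust them to your environment.

METADATA_DEV=/dev/dm-12   # device the ha_agent metadata file points to (example from the report)

# 1. Shut down the engine VM first, then stop the HA daemons on this host.
systemctl stop ovirt-ha-agent ovirt-ha-broker

# 2. Zero the metadata device so any stale or garbage metadata is cleared.
#    dd exits non-zero when it reaches the end of the device, hence "|| true".
dd if=/dev/zero of="$METADATA_DEV" bs=1M || true

# 3. Start the daemons again; the agent rewrites fresh metadata and HA
#    restarts the engine VM after a while.
systemctl start ovirt-ha-broker ovirt-ha-agent

# 4. Confirm the host can read the metadata again.
hosted-engine --vm-status
```

To reproduce the problem rather than repair it (the reproducer mentioned in the comments), the same metadata area can instead be filled with random data, for example from /dev/urandom, before the agent starts; this simulates deploying onto a non-clean iSCSI LUN.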