Bug 1640155
| Summary: | [DR] RHV failover of VMs to secondary site fails | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-ansible-collection | Reporter: | SATHEESARAN <sasundar> |
| Component: | disaster-recovery | Assignee: | Tal Nisan <tnisan> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Elad <ebenahar> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 1.1.10 | CC: | bugs, ebenahar, gpulido, mkalinin, mperina, rhs-bugs, sabose, sankarshan, tnisan |
| Target Milestone: | ovirt-4.2.7-1 | Keywords: | Regression, TestBlocker |
| Target Release: | 1.1.3 | Flags: | rule-engine: ovirt-4.2+, rule-engine: blocker+ |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-ansible-disaster-recovery-1.1.3 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1640139 | Environment: | hc |
| Last Closed: | 2018-11-13 16:12:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1640139 | | |
| Attachments: | ansible.log (1494841), mapping file used for failover (1494842) | | |
Description (SATHEESARAN, 2018-10-17 12:44:46 UTC)
Here is the error reported:

```
2018-10-16 16:39:50,600 p=30924 u=root | TASK [oVirt.disaster-recovery : Recover target engine] **************************************************
2018-10-16 16:39:50,600 p=30924 u=root | task path: /usr/share/ansible/roles/oVirt.disaster-recovery/tasks/main.yml:19
2018-10-16 16:39:50,633 p=30924 u=root | Read vars_file 'disaster_recovery_vars.yml'
2018-10-16 16:39:50,633 p=30924 u=root | Read vars_file 'passwords.yml'
2018-10-16 16:39:50,648 p=30924 u=root | fatal: [localhost]: FAILED! => {
    "reason": "Invalid options for include_tasks: storage\n\nThe error appears to have been in '/usr/share/ansible/roles/oVirt.disaster-recovery/tasks/recover_engine.yml': line 42, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n    # domain (which will make another storage domain as master instead).\n    - name: Add master storage domain to the setup\n      ^ here\n"
}
2018-10-16 16:39:50,650 p=30924 u=root | to retry, use: --limit @/usr/share/ansible/roles/oVirt.disaster-recovery/files/failover.retry
2018-10-16 16:39:50,650 p=30924 u=root | PLAY RECAP **********************************************************************************************
```
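For context: the failure is an Ansible 2.7 behavior change. `include_tasks` no longer accepts arbitrary keys as inline parameters, so a task that passes a key such as `storage` directly alongside the include fails with "Invalid options for include_tasks: storage". Below is a minimal sketch of the broken and fixed patterns, assuming a hypothetical `master_storage_domain` variable; the actual expression in the role's recover_engine.yml may differ.

```yaml
# Rejected by Ansible 2.7 (worked on 2.6): the extra "storage" key
# is parsed as an option of include_tasks itself, which is invalid.
- name: Add master storage domain to the setup
  include_tasks: recover/add_domain.yml
  storage: "{{ master_storage_domain }}"  # hypothetical variable, for illustration

# Accepted by both 2.6 and 2.7: parameters for the included tasks
# are passed explicitly through vars:.
- name: Add master storage domain to the setup
  include_tasks: recover/add_domain.yml
  vars:
    storage: "{{ master_storage_domain }}"  # hypothetical variable, for illustration
```

The actual fix shipped in ovirt-ansible-disaster-recovery-1.1.3 (the Fixed In Version above); this sketch only illustrates the class of change Ansible 2.7 forced, not the exact patch.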
ansible-2.7.0-0.4.rc4.el7ae.noarch
ovirt-ansible-disaster-recovery-1.1.2-1.el7ev.noarch

(In reply to SATHEESARAN from comment #2)
> ansible-2.7.0-0.4.rc4.el7ae.noarch
> ovirt-ansible-disaster-recovery-1.1.2-1.el7ev.noarch

Tested with the components listed above.

Created attachment 1494841 [details]
ansible.log

Created attachment 1494842 [details]
mapping file used for failover
Which milestone is this bug targeted to?

I assume that the closest one should be 4.2.7, so target it to this milestone.

Maor, is there a workaround for this issue (until 4.2.8 is released)?

You can use ansible 2.6.x until the fix is published. Elad, I know that you had doubts about whether this bug can be tested for 4.2.7; is there any chance we can still push it?

Yes, we will probably get a respin of 4.2.7 for the fix here.

(In reply to Maor from comment #9)
> You can use ansible 2.6.x until the fix will be published.

Downgrading to ansible 2.6.x is not an option, as all other features are tested with 2.7 (and we would in any case always get the latest ansible in the channel).

Failover with a Gluster domain succeeded; the domain was imported successfully to the secondary site:
```
TASK [oVirt.disaster-recovery : Recover target engine] ************************************************************************************************************************************************************
task path: /usr/share/ansible/roles/oVirt.disaster-recovery/tasks/main.yml:19
included: /usr/share/ansible/roles/oVirt.disaster-recovery/tasks/recover_engine.yml for localhost
TASK [oVirt.disaster-recovery : Obtain SSO token] *****************************************************************************************************************************************************************
task path: /usr/share/ansible/roles/oVirt.disaster-recovery/tasks/recover_engine.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1541341086.35-217143828950081 `" && echo ansible-tmp-1541341086.35-217143828950081="` echo /root/.ansible/tmp/ansible-tmp-1541341086.35-217143828950081 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/ovirt/ovirt_auth.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-21392Ohs45r/tmpFWuT_9 TO /root/.ansible/tmp/ansible-tmp-1541341086.35-217143828950081/AnsiballZ_ovirt_auth.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1541341086.35-217143828950081/ /root/.ansible/tmp/ansible-tmp-1541341086.35-217143828950081/AnsiballZ_ovirt_auth.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1541341086.35-217143828950081/AnsiballZ_ovirt_auth.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1541341086.35-217143828950081/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
.
.
.
TASK [oVirt.disaster-recovery : Add storage domain if Gluster] ****************************************************************************************************************************************************
task path: /usr/share/ansible/roles/oVirt.disaster-recovery/tasks/recover/add_domain.yml:19
included: /usr/share/ansible/roles/oVirt.disaster-recovery/tasks/recover/add_glusterfs_domain.yml for localhost
TASK [oVirt.disaster-recovery : Add Gluster storage domain] *******************************************************************************************************************************************************
task path: /usr/share/ansible/roles/oVirt.disaster-recovery/tasks/recover/add_glusterfs_domain.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
```
Used:
- ovirt-ansible-disaster-recovery-1.1.3-1.el7ev.noarch
- ovirt-ansible-roles-1.1.5-2.el7ev.noarch
- ansible-2.7.1-1.el7ae.noarch
- ovirt-engine-4.2.7.4-0.1.el7ev.noarch

Thanks kgoldbla for your help!
This bugzilla is included in the oVirt 4.2.7 Async 1 release, published on November 13th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.