Bug 1661932 - Restore on Gluster failed with "[Problem while trying to mount target]". HTTP response code is 400.
Summary: Restore on Gluster failed with "[Problem while trying to mount target]". HTTP response code is 400.
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: 4.2.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.3.0
Target Release: ---
Assignee: Simone Tiraboschi
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-24 15:54 UTC by Nikolai Sednev
Modified: 2019-01-08 12:57 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-08 12:57:51 UTC
oVirt Team: Integration
Target Upstream Version:


Attachments (Terms of Use)
logs from host puma18 (12.74 MB, application/gzip)
2018-12-27 10:51 UTC, Nikolai Sednev

Description Nikolai Sednev 2018-12-24 15:54:53 UTC
Description of problem:
Restore on Gluster failed with "[Problem while trying to mount target]". HTTP response code is 400.

[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": [{"msg": "The 'ovirt_storage_domains' module is being renamed 'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}

A bare-metal restore of the engine onto hosted-engine over Gluster using "hosted-engine --deploy --restore-from-file=/root/nsednev_from_baremetalVM_rhevm_4_2_8" failed. The same restore over NFS succeeds.

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-ha-2.2.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.32-1.el7ev.noarch
rhvm-appliance.noarch 2:4.2-20181212.0.el7


How reproducible:
100%

Steps to Reproduce:
1. Redeploy on Gluster from a bare-metal environment where the SPM host differs from the one in the backup and power management is not configured.

Actual results:
Restore fails.

Expected results:
Restore should succeed.

Additional info:
Logs are from puma18, which was not the SPM host during the restore process.

Comment 2 Yedidyah Bar David 2018-12-26 10:11:23 UTC
Please attach relevant logs. Can be sosreport. Thanks.

Comment 3 Nikolai Sednev 2018-12-26 11:29:03 UTC
(In reply to Yedidyah Bar David from comment #2)
> Please attach relevant logs. Can be sosreport. Thanks.

Yes, I know the logs are missing; the issue requires deeper investigation and I'm still working on it.
Once I have results, I will add the logs.

Comment 4 Nikolai Sednev 2018-12-27 10:51:35 UTC
Created attachment 1517038 [details]
logs from host puma18

Comment 5 Nikolai Sednev 2018-12-27 10:53:09 UTC
http://pastebin.test.redhat.com/688431 Deployment flow with details.

Comment 7 Nikolai Sednev 2019-01-07 10:38:10 UTC
gluster01 ~]# glusterfs --version
glusterfs 5.1
Repository revision: git://git.gluster.org/glusterfs.git
Tested on gluster01.scl.lab.tlv.redhat.com.

Comment 8 Raz Tamir 2019-01-07 11:33:58 UTC
There's a bug on this gluster cluster gluster01.scl.lab.tlv.redhat.com.

Please re-test on gluster01.lab.eng.tlv2.redhat.com (RHV QE infra gluster cluster if you need help creating volumes)

Comment 9 Nikolai Sednev 2019-01-07 14:02:49 UTC
(In reply to Raz Tamir from comment #8)
> There's a bug on this gluster cluster gluster01.scl.lab.tlv.redhat.com.
> 
> Please re-test on gluster01.lab.eng.tlv2.redhat.com (RHV QE infra gluster
> cluster if you need help creating volumes)

It was working just fine for regular deployments until now.
gluster01.lab.eng.tlv2.redhat.com runs glusterfs 3.12.6.
Should that version be fine?

Comment 13 Nikolai Sednev 2019-01-08 12:57:51 UTC
Works fine with glusterfs 3.12.6

Tested on:
ovirt-engine-setup-4.2.8.1-0.1.el7ev.noarch
glusterfs 3.12.6
ovirt-hosted-engine-ha-2.2.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.32-1.el7ev.noarch
rhvm-appliance-4.2-20181212.0.el7.noarch
Red Hat Enterprise Linux Server release 7.6 (Maipo)
Linux 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
The previous failure was due to a known bug in gluster 5.1.
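Since the failure turned out to be version-specific on the client side, a pre-flight check along these lines could flag it before a restore attempt. This is only a sketch, not part of ovirt-hosted-engine-setup; the gluster_major helper is hypothetical:

```shell
# Hypothetical pre-flight check (a sketch, not part of ovirt-hosted-engine-setup):
# warn when the glusterfs client major version is in the 5.x line, which hit
# the mount failure in this bug; the 3.12.x client was known good.
gluster_major() {
  # takes the first line of `glusterfs --version`, e.g. "glusterfs 5.1";
  # strips the leading "glusterfs " token, then keeps everything before
  # the first dot of the version number
  ver="${1#glusterfs }"
  printf '%s\n' "${ver%%.*}"
}

gluster_major "glusterfs 5.1"     # -> 5 (failing client in this report)
gluster_major "glusterfs 3.12.6"  # -> 3 (known-good client)

# only run the live check when the glusterfs client is actually installed
if command -v glusterfs >/dev/null 2>&1; then
  if [ "$(gluster_major "$(glusterfs --version | head -n1)")" -ge 5 ]; then
    echo "WARN: glusterfs 5.x client detected; restore may fail to mount target"
  fi
fi
```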

Moving to CLOSED WORKSFORME.

