Bug 1158833 - Self-Hosted Engine not found on NFS4 Storage Domain on second host setup with export path "/"
Keywords:
Status: CLOSED DUPLICATE of bug 1215967
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.4.3-1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-3.6.3
Target Release: 3.6.0
Assignee: Idan Shaby
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2014-10-30 10:35 UTC by Andrea Perotti
Modified: 2019-04-24 14:04 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-14 14:38:11 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments
Hosted Engine Setup log on first node (396.73 KB, text/plain)
2014-10-30 10:35 UTC, Andrea Perotti
Hosted Engine Setup log on second node (235.27 KB, text/plain)
2014-10-30 10:36 UTC, Andrea Perotti

Description Andrea Perotti 2014-10-30 10:35:24 UTC
Created attachment 952052 [details]
Hosted Engine Setup log on first node

Description of problem:
On RHEL 6.6, after installing hosted-engine on node1 and completing the installation of the RHEVM VM on an NFS share, we repeat the command on node2:
 hosted-engine --deploy
and enter the NFS storage domain info. The already installed manager is not recognized, and the setup only offers a new manager installation:

 Please specify the storage you would like to use (nfs3, nfs4)[nfs3]: nfs4
 Please specify the full shared storage connection path to use (example: host:/path): virt1.gcio.unicredit.eu:/ 
 [ INFO ] Installing on first host
 Please provide storage domain name. [hosted_storage]: 

Version-Release number of selected component (if applicable):
RHEL 6.6
RHEV 3.4
 ovirt-host-deploy-1.2.3-1.el6ev.noarch
 ovirt-hosted-engine-ha-1.1.5-1.el6ev.noarch
 ovirt-hosted-engine-setup-1.1.5-1.el6ev.noarch 
 
How reproducible:

* OS Setup
install rhel 6.6 on 2 nodes
install ovirt-hosted-engine-setup from rhev 3.4 on both

* Create an NFS4 export
# mkdir -p /nfs/rhev-manager
# chown 36:36 /nfs/rhev-manager
Explicitly set Domain in /etc/idmapd.conf
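For reference, the idmapd.conf change mentioned above would typically look like the following fragment (the domain value here is a placeholder, not taken from this report):

```
# /etc/idmapd.conf (fragment)
[General]
Domain = example.com
```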

export it as:
/nfs/rhev-manager *(rw,no_root_squash,fsid=0,sync)

restart NFS

* test mount from both nodes:

# mount -t nfs4 nfshost:/ /mnt

result: both nodes ok

* RHEV Setup
on node1
# hosted-engine --deploy
pass the nfs4 share above.
Install the rhevm on the VM created by the setup. Complete the procedure without any problem.
results:
- ssh on RHEVM is ok
- admin console ok
- status hosts ok

on node2
# hosted-engine --deploy
...
Please specify the storage you would like to use (nfs3, nfs4)[nfs3]: nfs4 
Please specify the full shared storage connection path to use (example: host:/path): nfshost:/ 
[ INFO ] Installing on first host      <-- NO!
Please provide storage domain name. [hosted_storage]: 

Actual results:
The RHEVM installed on NFS4 is not detected.

Expected results:
hosted-engine should detect the already installed RHEVM VM on the NFS4 share.

Comment 1 Andrea Perotti 2014-10-30 10:36:49 UTC
Created attachment 952053 [details]
Hosted Engine Setup log on second node

Comment 4 Simone Tiraboschi 2014-11-03 10:19:59 UTC
OK, found it: for some reason the trailing '/' is being removed from the remotePath attribute, so it no longer matches the stored storage pool attributes. The setup therefore thinks this is a new install and assumes it is running on the first host.

From the log of the second host:
2014-10-27 18:07:39 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._getStorageDomainInfo:432 {'status': {'message': 'OK', 'code': 0}, 'info': {'uuid': '7ec72ae9-44bd-408e-93f4-1fab1c881f69', 'version': '3', 'role': 'Master', 'remotePath': 'virt1.gcio.unicredit.eu:', 'type': 'NFS', 'class': 'Data', 'pool': ['4e9971cf-0c2b-4bd8-adb1-54a415fa8bc4'], 'name': 'hosted_storage'}}

I'm investigating it.

Comment 5 Simone Tiraboschi 2014-11-03 15:57:42 UTC
It's fully reproducible also on oVirt 3.5.0 with vdsm 4.16.7-1.

The problem is that VDSM removes the trailing '/' from the NFS remotePath attribute even when the user is simply connecting to the global root of an NFSv4 export, resulting in an invalid path identifier.
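A minimal sketch of the failure mode (hypothetical illustration, not the actual VDSM code): stripping a trailing '/' from an NFS connection path is harmless for ordinary exports, but for the NFSv4 pseudo-root the slash *is* the whole export path, so "host:/" degrades to the invalid "host:".

```python
def normalize_remote_path(path: str) -> str:
    # Strip a single trailing '/', mirroring the behaviour visible in the
    # logs above, where 'virt1.gcio.unicredit.eu:/' was stored as
    # 'virt1.gcio.unicredit.eu:' (function name is hypothetical).
    return path[:-1] if path.endswith('/') else path

# Ordinary export: normalization is harmless.
print(normalize_remote_path('nfshost:/export/path'))  # nfshost:/export/path

# NFSv4 global root: the only slash is lost, leaving an invalid spec.
print(normalize_remote_path('nfshost:/'))  # nfshost:
```

This is why the second host's lookup fails: the stored remotePath no longer matches any valid host:/path connection string.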

[root@thec653 ~]# vdsClient -s 0 getStorageDomainsList 
21fcaaec-ed51-4a17-adcb-eb6b5a087eba

[root@thec653 ~]# vdsClient -s 0 getStorageDomainInfo 21fcaaec-ed51-4a17-adcb-eb6b5a087eba
        uuid = 21fcaaec-ed51-4a17-adcb-eb6b5a087eba
        version = 3
        role = Master
        remotePath = 192.168.1.115:
        type = NFS
        class = Data
        pool = ['73da157b-322e-4b00-b4cf-ee3aeac0962d']
        name = hosted_storage

Comment 6 Allon Mureinik 2014-11-07 08:54:18 UTC
As a temporary workaround, have you tried using "nfshost://"?

Comment 9 Andrea Perotti 2014-12-22 15:58:32 UTC
Unfortunately, the customer has moved the RHEVM hosted engine from NFS4 to NFS3 storage, so I am no longer able to run any tests on their side.

Comment 10 Yaniv Lavi 2015-01-25 14:55:44 UTC
For now, as a workaround, please do not use a mount whose export path is "/".
We will try to resolve this for 3.6.

Comment 11 Allon Mureinik 2015-06-14 14:38:11 UTC
Reproduced internally by QA. We will continue tracking on bug 1215967.

*** This bug has been marked as a duplicate of bug 1215967 ***

