Bug 1331481

Summary: Adding an additional oVirt Node to a datacenter containing local POSIX mounts fails
Product: [oVirt] ovirt-engine
Reporter: Bryan <bhughes>
Component: BLL.Storage
Assignee: Allon Mureinik <amureini>
Status: CLOSED DUPLICATE
QA Contact: Aharon Canan <acanan>
Severity: medium
Docs Contact:
Priority: unspecified
Version: ---
CC: amureini, bhughes, bugs, rgolan, sbonazzo
Target Milestone: ---
Flags: rule-engine: planning_ack?
       rule-engine: devel_ack?
       rule-engine: testing_ack?
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-01 19:09:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Attachments:
Description              Flags
VDSM log                 none
Hosted-engine-setup log  none
host deploy log          none

Description Bryan 2016-04-28 15:23:37 UTC
Description of problem:  I have created an oVirt hosted-engine deployment on Node1. I then attached local filesystem mounts to the datacenter as POSIX Compliant FS storage domains. When I try to attach another server to the datacenter, it fails because it cannot connect to the storage pool: the new server tries to mount the local POSIX mount points, which it obviously cannot. I would expect it to skip those storage domains, but it does not, and the failure prevents me from activating the second host in the oVirt engine.

This is a HUGE blocker for our install and we can't proceed without this ability.

NOTE: I have to use ovirt-3.6.2 because of bug 1320128, where adding additional hosts via the automated install does not work in versions above 3.6.2:
https://bugzilla.redhat.com/show_bug.cgi?id=1320128

This is also partially related to bug 1257506 and bug 1134318:
https://bugzilla.redhat.com/show_bug.cgi?id=1257506
https://bugzilla.redhat.com/show_bug.cgi?id=1134318


Version-Release number of selected component (if applicable):
ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
vdsm-4.17.23.2-0.el7.centos.noarch
vdsm-xmlrpc-4.17.23.2-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-cli-4.17.23.2-0.el7.centos.noarch
vdsm-infra-4.17.23.2-0.el7.centos.noarch
vdsm-python-4.17.23.2-0.el7.centos.noarch
vdsm-gluster-4.17.23.2-0.el7.centos.noarch
vdsm-jsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.23.2-0.el7.centos.noarch

How reproducible:
100%

Steps to Reproduce:
1. Stand up Ovirt Hosted engine on a node1
2. Create a Storage domain of type POSIX Compliant FS on Node1
3. Attempt to attach Node2 to the datacenter; it fails with an error that it cannot connect to the storage domains.
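The failure in step 3 can be illustrated independently of oVirt: the engine asks every host in the datacenter to connect all of the pool's storage domains, and Node2 has none of Node1's local /hdfs* filesystems mounted. A minimal sketch (hypothetical helper, not oVirt code) that checks a /proc/mounts dump for the mount points a set of POSIX domains would require:

```python
def missing_mounts(proc_mounts: str, required: list[str]) -> list[str]:
    """Return the required mount points that are absent from a /proc/mounts dump."""
    # Field 2 of each /proc/mounts line is the mount point.
    mounted = {fields[1] for line in proc_mounts.splitlines()
               if (fields := line.split())}
    return [path for path in required if path not in mounted]

# Node1 has all three local filesystems mounted; Node2 has none of them,
# so every local POSIX domain would fail to connect there.
node1 = ("/dev/sdb1 /hdfs01 xfs rw 0 0\n"
         "/dev/sdc1 /hdfs02 xfs rw 0 0\n"
         "/dev/sdd1 /hdfs03 xfs rw 0 0\n")
node2 = "/dev/sda1 / xfs rw 0 0\n"
required = ["/hdfs01", "/hdfs02", "/hdfs03"]

print(missing_mounts(node1, required))  # []
print(missing_mounts(node2, required))  # ['/hdfs01', '/hdfs02', '/hdfs03']
```

The device names here are placeholders; only the mount points /hdfs01..03 come from this report.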

Actual results:
The server attaches to the datacenter but will not activate

Expected results:
Node2 is added to the cluster and ignores the Storage domains that are local to Node1

Additional info:

Comment 1 Bryan 2016-04-28 15:27:58 UTC
Created attachment 1151942 [details]
VDSM log

Comment 2 Bryan 2016-04-28 15:28:27 UTC
Created attachment 1151944 [details]
Hosted-engine-setup log

Comment 3 Bryan 2016-04-28 15:28:53 UTC
Created attachment 1151945 [details]
host deploy log

Comment 4 Bryan 2016-04-28 15:32:38 UTC
For reference:

My storage domains exist like this after the first host is deployed and before trying to add the second host:

Critical (Master domain) -> Gluster
Noncritical              -> Gluster
ISODatastore             -> Gluster
REPO                     -> Gluster
HDFS01                   -> Posix
HDFS02                   -> Posix
HDFS03                   -> Posix

HDFS01 is a local mount on first host:  it is mounted at /hdfs01 on host1
HDFS02 is a local mount on first host:  it is mounted at /hdfs02 on host1
HDFS03 is a local mount on first host:  it is mounted at /hdfs03 on host1

I don't want host2 to use or access these three storage domains, it has no need to for our deployment.

So host2 will still have storage domains to connect to (all the Gluster ones) when it joins the datacenter.
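To make the requested behaviour concrete: what is missing is a way to mark the three POSIX domains as host-local so that the engine only asks other hosts to connect the shared ones. A sketch of that split (domain names and types taken from this comment; the per-host filtering itself does not exist in oVirt, which is the bug):

```python
# Domain layout from this comment. oVirt's actual model attaches every
# active domain in the datacenter to every host, with no notion of a
# "host-local" POSIX domain.
domains = {
    "Critical":     "glusterfs",  # master domain
    "Noncritical":  "glusterfs",
    "ISODatastore": "glusterfs",
    "REPO":         "glusterfs",
    "HDFS01":       "posixfs",
    "HDFS02":       "posixfs",
    "HDFS03":       "posixfs",
}

# Hosts other than host1 should only be asked to connect the shared domains.
shared = [name for name, kind in domains.items() if kind == "glusterfs"]
host_local = [name for name, kind in domains.items() if kind == "posixfs"]

print(shared)      # ['Critical', 'Noncritical', 'ISODatastore', 'REPO']
print(host_local)  # ['HDFS01', 'HDFS02', 'HDFS03']
```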

Comment 5 Sandro Bonazzola 2016-04-29 11:44:42 UTC
Moving to ovirt-engine, since the logic that allows or disallows adding a host lives there.

Hosted Engine hosts need shared storage in order to run the HE VM, while local storage is what is needed here. So the request is to allow local storage domains to be attached to hosts that already have shared storage domains attached.

Comment 6 Allon Mureinik 2016-05-01 19:09:53 UTC
This is on the roadmap for when storage domains are connected to clusters rather than DCs, but unfortunately that won't happen any time soon.

*** This bug has been marked as a duplicate of bug 1134318 ***