Description of problem:
A 3-node hosted-engine HA cluster with the engine storage on GlusterFS cannot have additional non-HA hosts added. The new hosts fail because the GlusterFS hosted-engine storage is mounted as NFS, but with the gluster options, and the mount subsequently fails.

Version-Release number of selected component (if applicable):
3.6.3.4

How reproducible:
Very

Steps to Reproduce:
1. Add a new (non-HA) host to the datacentre that hosts the gluster hosted-engine storage

Actual results:
The host fails because it is unable to mount the hosted-engine storage.

Expected results:
An active host.

Additional info:
jsonrpc.Executor/4::INFO::2016-03-09 16:05:02,002::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000001-0001-0001-0001-000000000229', conList=[{u'id': u'19fb9b3b-79c1-48e8-9300-d0d52ddce7b1', u'connection': u'ovirt36-h1:/hosted-engine', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'password': '********', u'port': u''}], options=None)
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine mode: None
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::storageServer::357::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['ovirt36-h1', 'ovirt36-h2', 'ovirt36-h3']
jsonrpc.Executor/6::DEBUG::2016-03-09 15:10:01,022::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-11 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine (cwd None)
jsonrpc.Executor/6::ERROR::2016-03-09 15:10:01,042::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
    six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 228, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/mount.py", line 225, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';Running scope as unit run-18808.scope.\nmount.nfs: an incorrect mount option was specified\n')

The '-t glusterfs' option is not being passed, so mount defaults to an NFS mount, which fails due to the incorrect (gluster) options.
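For comparison, here is the failing command from the log versus what a working mount would look like once the filesystem type is passed (a minimal sketch; servers and paths are taken from the log above):

  # what vdsm ran: without -t, mount defaults to NFS, and mount.nfs rejects the gluster option
  mount -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine
  # what it should run: with -t glusterfs the backup-volfile-servers option is valid
  mount -t glusterfs -o backup-volfile-servers=ovirt36-h2:ovirt36-h3 ovirt36-h1:/hosted-engine /rhev/data-center/mnt/glusterSD/ovirt36-h1:_hosted-engine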
A suggested workaround was successful: set the vfs_type field to 'glusterfs' in the storage_server_connections table in the engine database. (My engine gluster storage was called 'hosted-engine'.)

  update storage_server_connections set vfs_type = 'glusterfs' where connection = 'hosted-engine';

Hosts can be added from then on.
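A minimal sketch of applying that update from the engine VM's shell (assuming the default 'engine' database name; note the connection column stores the full 'host:/volume' string shown in the log above, so match that rather than just the volume name):

  su - postgres -c "psql -d engine -c \"update storage_server_connections set vfs_type = 'glusterfs' where connection = 'ovirt36-h1:/hosted-engine';\""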
Roy, any reason the mount options are not sent correctly?
Darryl, can you please explain how you installed HE with GlusterFS? The current installer will not create the bricks and volume for you.
I created the gluster volume manually: 3 hosts, each with 2 disks; the root filesystem is mirrored on each host, and the remainder of each disk is XFS, with a 3-way replica GlusterFS volume across the 6 disks as bricks. I used the recommended gluster settings for oVirt and set up the hosted engine as per the documentation. The 3 hosts are the HE hosts for the engine. The GlusterFS volume holds the storage for the engine VM, a reports engine VM, and a Cinder VM for the Ceph main storage. I know hyper-converged is not yet 'production ready', but this configuration seems to work well.
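Roughly what that layout looks like as commands (a sketch only; the brick paths are assumed names, and with 6 bricks 'replica 3' produces a 2x3 distributed-replicate volume):

  gluster volume create hosted-engine replica 3 \
      ovirt36-h1:/bricks/b1 ovirt36-h2:/bricks/b1 ovirt36-h3:/bricks/b1 \
      ovirt36-h1:/bricks/b2 ovirt36-h2:/bricks/b2 ovirt36-h3:/bricks/b2
  # plus the recommended oVirt settings, e.g. vdsm user/group ownership:
  gluster volume set hosted-engine storage.owner-uid 36
  gluster volume set hosted-engine storage.owner-gid 36
  gluster volume start hosted-engine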
HE hosts are connected by the hosted-engine agents, so they know it's '-t glusterfs'. Non-HE hosts are connected via the engine, but the domain in the engine is missing that vfs type. The root cause is probably the auto-import, which is not adding the vfs type when it's gluster, or maybe the AddExistingStorageDomainCommand, which is ignoring the type.
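One way to confirm is to look at what the auto-import stored for the connection (a sketch, assuming the default 'engine' database name; an empty vfs_type here is what leaves '-t glusterfs' out of the mount command):

  su - postgres -c "psql -d engine -c \"select connection, vfs_type from storage_server_connections;\""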
*** Bug 1324075 has been marked as a duplicate of this bug. ***
The patch is still a draft on master, which means it'll miss 3.6.6. Moving to 3.6.7. Let me know if that's not OK.
Verified on rhevm-3.6.7.2-0.1.el6.noarch
*** Bug 1351203 has been marked as a duplicate of this bug. ***
Simone, take a look: https://access.redhat.com/solutions/2423321