Additional hosts are detected as the first host because VDSM reports the storage domain type as the string 'GLUSTERFS' instead of the integer 7. VDSM answers the getStorageDomainInfo call with: {'status': {'message': 'OK', 'code': 0}, 'info': {'uuid': 'c54f691c-fc83-4a74-9c02-a68ef87e68b3', 'version': '3', 'role': 'Master', 'remotePath': 'h4.imatronix.com:engine', 'type': 'GLUSTERFS', 'class': 'Data', 'pool': ['61e2d6d0-f798-428d-a07e-397fc7a4f10f'], 'name': 'hosted_storage'}}. A fix to match this API is needed. A discussion to get clear API documentation for getStorageDomainInfo has been started on the oVirt devel mailing list.
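A minimal sketch of the kind of fix discussed here (the names are illustrative, not the actual ovirt-hosted-engine-setup code): normalize the 'type' value from a getStorageDomainInfo response so that both the numeric constant and the string form VDSM actually returns are accepted. The numeric mapping below assumes VDSM's storage domain type constants, where 7 is GLUSTERFS as mentioned in this bug.

```python
# Assumed mapping of VDSM storage domain type names to their numeric
# codes (illustrative; 7 for GLUSTERFS matches the value in this bug).
DOMAIN_TYPES = {
    'UNKNOWN': 0,
    'NFS': 1,
    'FCP': 2,
    'ISCSI': 3,
    'LOCALFS': 4,
    'CIFS': 5,
    'POSIXFS': 6,
    'GLUSTERFS': 7,
}


def normalize_domain_type(value):
    """Return the numeric domain type for either form of the 'type' key."""
    if isinstance(value, int):
        return value
    try:
        return DOMAIN_TYPES[value]
    except KeyError:
        raise ValueError('Unexpected storage domain type: %r' % (value,))


# The response quoted above carries 'type': 'GLUSTERFS'; a caller
# comparing against the integer 7 would previously have failed here.
info = {'type': 'GLUSTERFS'}
print(normalize_domain_type(info['type']))  # 7
```

With this, a caller can keep comparing against the numeric constant regardless of which form the API returns.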
Allon, can you provide API documentation on the 'type' key of the getStorageDomainInfo call? Is it supposed to be str (GLUSTERFS) or int (7) ?
Nir?
AFAIK it uses strings ('GLUSTERFS'). Nir?
(In reply to Allon Mureinik from comment #3) > AFAIK it uses strings ('GLUSTERFS'). > Nir? And is it the same for NFS, iSCSI and FC?
(In reply to Sandro Bonazzola from comment #1) > Allon, can you provide API documentation on the 'type' key of the > getStorageDomainInfo call? > Is it supposed to be str (GLUSTERFS) or int (7) ? It should be the string "GLUSTERFS". Note that the 'type' key is not documented; it should have been "domainType" according to the schema (see bug 1214346).
*** Bug 1215663 has been marked as a duplicate of this bug. ***
Hi Fabian, Can we get some steps for reproduction of this bug please?
(In reply to Nikolai Sednev from comment #7) > Hi Fabian, > Can we get some steps for reproduction of this bug please? Looks like either you wrote the wrong name or the wrong needinfo address. Steps to reproduce:
1) Deploy the first HE host using external GlusterFS replica-3 storage.
2) Deploy a second HE host.
- Success: hosted-engine --deploy asks whether the second host is being added as an additional host.
- Fail: it doesn't ask and just deploys it as a first host.
An additional hosted-engine host deployment over Gluster works as expected. Tested using ovirt-hosted-engine-setup-1.3.0-1.el7ev.noarch
oVirt 3.6.0 has been released on November 4th, 2015 and should fix this issue. If problems still persist, please open a new BZ and reference this one.