I pushed a patch for provisioning a replica 3 volume to be used by Hosted Engine in an HC (hyperconverged) configuration: https://gerrit.ovirt.org/#/c/41332/

The deployment fails. Current state of the cluster:

# vdsClient -s 0 glusterHostsList
{'hosts': [{'hostname': '10.0.0.1/24', 'status': 'CONNECTED', 'uuid': 'bca960bb-1444-4984-949f-87dd6b106e47'}, {'hostname': 'minidell.home', 'status': 'CONNECTED', 'uuid': '0fb5aa80-821e-4361-aca1-f22e8f8050e7'}, {'hostname': '192.168.1.109', 'status': 'CONNECTED', 'uuid': '6a41d7be-a9c4-40fc-9ed5-ef2aedd52f0f'}], 'status': {'code': 0, 'message': 'Done'}}

# vdsClient -s 0 glusterVolumesList
{'status': {'code': 0, 'message': 'Done'}, 'volumes': {'hosted_engine_glusterfs': {'brickCount': '3', 'bricks': ['192.168.1.107:/home/test', 'minidell.home:/home/test', '192.168.1.109:/home/test'], 'bricksInfo': [{'hostUuid': 'bca960bb-1444-4984-949f-87dd6b106e47', 'name': '192.168.1.107:/home/test'}, {'hostUuid': '0fb5aa80-821e-4361-aca1-f22e8f8050e7', 'name': 'minidell.home:/home/test'}, {'hostUuid': '6a41d7be-a9c4-40fc-9ed5-ef2aedd52f0f', 'name': '192.168.1.109:/home/test'}], 'disperseCount': '0', 'distCount': '3', 'options': {'auth.allow': '*', 'cluster.eager-lock': 'enable', 'cluster.quorum-type': 'auto', 'cluster.server-quorum-type': 'server', 'network.ping-timeout': '10', 'network.remote-dio': 'enable', 'nfs.disable': 'on', 'performance.io-cache': 'off', 'performance.quick-read': 'off', 'performance.read-ahead': 'off', 'performance.readdir-ahead': 'on', 'performance.stat-prefetch': 'off', 'server.allow-insecure': 'on', 'storage.owner-gid': '36', 'storage.owner-uid': '36', 'user.cifs': 'disable'}, 'redundancyCount': '0', 'replicaCount': '3', 'stripeCount': '1', 'transportType': ['TCP'], 'uuid': 'fac24894-a863-436d-883a-4b615c966836', 'volumeName': 'hosted_engine_glusterfs', 'volumeStatus': 'ONLINE', 'volumeType': 'REPLICATE'}}}
Done

The issue here is that the host has two NICs: while it was asked to create the volume on the 192.168.1.107 NIC, VDSM automatically added the NIC on 10.0.0.1/24 to the pool. Probing 192.168.1.107 is not useful, since probing an additional interface of a host from that host itself is ignored. You also can't remove the 10.0.0.1/24 NIC, since the detach fails because that address is "localhost".
Sahina, can you help with this issue?
Is it possible during HE setup to ssh to the 2nd node, and peer probe the first host with "192.168.1.107"? Gluster needs to know that 192.168.1.107 is another known address for the host, and this is one way to do it.
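A minimal sketch of that suggestion, written as a dry-run script (it only prints the commands instead of executing them; the address is taken from this bug report, and the probe must run on the 2nd node, not on the first host itself):

```shell
#!/bin/sh
# Alternate address of the first host that gluster should learn about
# (taken from this bug report; adjust for the actual deployment).
ALT_ADDR="192.168.1.107"

# Dry-run helper: print the command instead of executing it.
run() { echo "$@"; }

# Must be issued from ANOTHER peer (e.g. the 2nd node): glusterd
# silently ignores a probe of one of its own addresses.
run gluster peer probe "$ALT_ADDR"

# Afterwards the address should appear under "Other names" for the peer.
run gluster peer status
```

Dropping the `run` prefix (or redefining it as `run() { "$@"; }`) turns the sketch into the real commands.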
(In reply to Sahina Bose from comment #2)
> Is it possible during HE setup to ssh to the 2nd node, and peer probe the
> first host with "192.168.1.107" ?
> Gluster needs to know that 192.168.1.107 is another known address for the
> host, and this is one way to do it.

Do you mean executing vdsm cli.glusterHostAdd(host) on the second and third hosts? The issue there is that nobody will have configured vdsm yet at that point. Or do you mean calling the gluster CLI directly there?

Isn't there a way to tell glusterd running on localhost which interface must be used, and let it propagate that info to the other members of the cluster?
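For reference, the cli.glusterHostAdd(host) route mentioned above could look roughly like this through vdsClient, if vdsm were already configured on the other hosts (dry-run sketch; the "hostName=" argument form is an assumption, so check the vdsClient help on your version):

```shell
#!/bin/sh
# Dry-run sketch: register the first host's storage-network address from
# the 2nd/3rd host through vdsm's gluster verbs. The "hostName=" argument
# form is an assumption; verify it against your vdsClient version.
ALT_ADDR="192.168.1.107"

run() { echo "$@"; }   # print instead of executing

run vdsClient -s 0 glusterHostAdd hostName="$ALT_ADDR"
```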
Solving this on the Hosted Engine side. I'll open an RFE for gluster.
(In reply to Sandro Bonazzola from comment #3)
> (In reply to Sahina Bose from comment #2)
> > Is it possible during HE setup to ssh to the 2nd node, and peer probe the
> > first host with "192.168.1.107" ?
> > Gluster needs to know that 192.168.1.107 is another known address for the
> > host, and this is one way to do it.
>
> Do you mean executing vdsm cli.glusterHostAdd(host) on the second and third
> host?
> Issue there will be that nobody will have configured vdsm yet.
> Or do you mean by calling gluster cli directly there?

If vdsm is not installed on the other hosts, then this is an issue. The other way to do this would be to correct the brick interfaces after vdsm is installed: set up the gluster volume with the 10.0.0.x addresses, and once all hosts have vdsm installed, the alternate IP address can be peer probed too. We then need to run replace-brick with the alternate IP address. Not an ideal way, though.

> Isn't there a way to tell glusterd running on localhost which interface must
> be used and let it propagate info to other members of the cluster?

No.
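The replace-brick workaround described above could be sketched as follows (dry-run script that only prints the commands; the volume name and brick path come from the listing in the first comment, the 10.0.0.1 brick address is an assumption about what vdsm would auto-register, and the exact replace-brick syntax should be double-checked against your gluster version):

```shell
#!/bin/sh
# Dry-run sketch of the replace-brick workaround: after all hosts run
# vdsm and the alternate address has been peer probed, move the brick
# from the auto-detected address to the intended storage-network one.
VOLUME="hosted_engine_glusterfs"
OLD_BRICK="10.0.0.1:/home/test"        # address vdsm auto-registered (assumed)
NEW_BRICK="192.168.1.107:/home/test"   # intended address, from this bug

run() { echo "$@"; }   # print instead of executing

# "commit force" is the historical form when only the address changes
# and the brick data stays in place; verify on your gluster version.
run gluster volume replace-brick "$VOLUME" "$OLD_BRICK" "$NEW_BRICK" commit force
```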
Is this still required? I think we plan to provision the HC nodes via Ansible and gdeploy.
(In reply to Sahina Bose from comment #6)
> Is this still required? I think we plan to provision the HC nodes via
> Ansible and gdeploy.

Closing as per above comment.