Bug 1225462 - [HC] Issue creating Replica 3 set on HC deploy
Summary: [HC] Issue creating Replica 3 set on HC deploy
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: General
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@ovirt.org
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks: 1217448
 
Reported: 2015-05-27 12:35 UTC by Sandro Bonazzola
Modified: 2022-05-06 08:40 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-18 12:35:02 UTC
oVirt Team: Gluster
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-45963 0 None None None 2022-05-06 08:40:41 UTC
oVirt gerrit 41332 0 master ABANDONED HC: provisioning replica 3 volume Never

Description Sandro Bonazzola 2015-05-27 12:35:24 UTC
I pushed a patch for provisioning a replica 3 volume to be used by Hosted Engine in an HC configuration: https://gerrit.ovirt.org/#/c/41332/

The deployment fails, leaving the following state:

# vdsClient -s 0 glusterHostsList
{'hosts': [{'hostname': '10.0.0.1/24',
              'status': 'CONNECTED',
              'uuid': 'bca960bb-1444-4984-949f-87dd6b106e47'},
             {'hostname': 'minidell.home',
              'status': 'CONNECTED',
              'uuid': '0fb5aa80-821e-4361-aca1-f22e8f8050e7'},
             {'hostname': '192.168.1.109',
              'status': 'CONNECTED',
              'uuid': '6a41d7be-a9c4-40fc-9ed5-ef2aedd52f0f'}],
   'status': {'code': 0, 'message': 'Done'}}

# vdsClient -s 0 glusterVolumesList
{'status': {'code': 0, 'message': 'Done'},
 'volumes': {'hosted_engine_glusterfs': {
     'brickCount': '3',
     'bricks': ['192.168.1.107:/home/test',
                'minidell.home:/home/test',
                '192.168.1.109:/home/test'],
     'bricksInfo': [{'hostUuid': 'bca960bb-1444-4984-949f-87dd6b106e47',
                     'name': '192.168.1.107:/home/test'},
                    {'hostUuid': '0fb5aa80-821e-4361-aca1-f22e8f8050e7',
                     'name': 'minidell.home:/home/test'},
                    {'hostUuid': '6a41d7be-a9c4-40fc-9ed5-ef2aedd52f0f',
                     'name': '192.168.1.109:/home/test'}],
     'disperseCount': '0',
     'distCount': '3',
     'options': {'auth.allow': '*',
                 'cluster.eager-lock': 'enable',
                 'cluster.quorum-type': 'auto',
                 'cluster.server-quorum-type': 'server',
                 'network.ping-timeout': '10',
                 'network.remote-dio': 'enable',
                 'nfs.disable': 'on',
                 'performance.io-cache': 'off',
                 'performance.quick-read': 'off',
                 'performance.read-ahead': 'off',
                 'performance.readdir-ahead': 'on',
                 'performance.stat-prefetch': 'off',
                 'server.allow-insecure': 'on',
                 'storage.owner-gid': '36',
                 'storage.owner-uid': '36',
                 'user.cifs': 'disable'},
     'redundancyCount': '0',
     'replicaCount': '3',
     'stripeCount': '1',
     'transportType': ['TCP'],
     'uuid': 'fac24894-a863-436d-883a-4b615c966836',
     'volumeName': 'hosted_engine_glusterfs',
     'volumeStatus': 'ONLINE',
     'volumeType': 'REPLICATE'}}}
Done

The issue here is that the host has two NICs: while the setup asked to create the volume using the 192.168.1.107 interface, VDSM automatically added the 10.0.0.1/24 interface to the peer pool.

Probing 192.168.1.107 is not useful, since probing an additional interface of a host from the host itself is simply ignored.

You also can't detach the 10.0.0.1/24 NIC, since the operation fails because that address is "localhost".
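
For reference, trying to drop the extra peer address directly with the gluster CLI fails roughly like this (sketch; exact error text approximate):

# gluster peer detach 10.0.0.1
peer detach: failed: 10.0.0.1 is localhost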

Comment 1 Sandro Bonazzola 2015-06-30 06:51:49 UTC
Sahina, can you help with this issue?

Comment 2 Sahina Bose 2015-06-30 08:28:56 UTC
Is it possible during HE setup to ssh to the 2nd node, and peer probe the first host with "192.168.1.107" ?
Gluster needs to know that 192.168.1.107 is another known address for the host, and this is one way to do it.
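
Something along these lines, assuming root ssh access from the setup host to the second node (sketch, untested):

# ssh root@<second-node> 'gluster peer probe 192.168.1.107'

Afterwards 'gluster peer status' on the other nodes should show 192.168.1.107 as another address of the first host.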

Comment 3 Sandro Bonazzola 2015-06-30 09:05:37 UTC
(In reply to Sahina Bose from comment #2)
> Is it possible during HE setup to ssh to the 2nd node, and peer probe the
> first host with "192.168.1.107" ?
> Gluster needs to know that 192.168.1.107 is another known address for the
> host, and this is one way to do it.

Do you mean executing vdsm's cli.glusterHostAdd(host) on the second and third hosts? The issue there is that nobody will have configured vdsm on them yet. Or do you mean calling the gluster CLI directly there?
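
For clarity, the two options as I understand them would look roughly like this on the second and third host (sketch; the vdsClient parameter spelling is from memory, not verified):

via vdsm, once it is configured there:
# vdsClient -s 0 glusterHostAdd hostName=192.168.1.107

or via the gluster CLI directly:
# gluster peer probe 192.168.1.107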

Isn't there a way to tell glusterd running on localhost which interface must be used and let it propagate info to other members of the cluster?

Comment 4 Sandro Bonazzola 2015-07-10 14:40:37 UTC
Solving this on the Hosted Engine side. I'll open an RFE for Gluster.

Comment 5 Sahina Bose 2015-07-14 05:52:31 UTC
(In reply to Sandro Bonazzola from comment #3)
> (In reply to Sahina Bose from comment #2)
> > Is it possible during HE setup to ssh to the 2nd node, and peer probe the
> > first host with "192.168.1.107" ?
> > Gluster needs to know that 192.168.1.107 is another known address for the
> > host, and this is one way to do it.
> 
> Do you mean executing vdsm's cli.glusterHostAdd(host) on the second and
> third hosts? The issue there is that nobody will have configured vdsm on
> them yet. Or do you mean calling the gluster CLI directly there?

If vdsm is not installed on the other hosts, then this is an issue. The other way to do this would be to correct the brick interfaces after vdsm is installed.

So you would set up the gluster volume with the 10.0.. IP addresses, and once all hosts have vdsm installed, the alternate IP addresses can be peer probed too. We would then need to run replace-brick with the alternate IP addresses. Not an ideal approach, though.
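
Roughly, once all hosts are up (sketch; volume and brick paths taken from the listing in the description, and replace-brick syntax differs across gluster versions):

# gluster peer probe 192.168.1.107
# gluster volume replace-brick hosted_engine_glusterfs \
      10.0.0.1:/home/test 192.168.1.107:/home/test commit force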

> 
> Isn't there a way to tell glusterd running on localhost which interface must
> be used and let it propagate info to other members of the cluster?

No

Comment 6 Sahina Bose 2016-04-12 11:53:25 UTC
Is this still required? I think we plan to provision the HC nodes via Ansible and gdeploy.

Comment 7 Sandro Bonazzola 2016-04-18 12:35:02 UTC
(In reply to Sahina Bose from comment #6)
> Is this still required? I think we plan to provision the HC nodes via
> Ansible and gdeploy.

Closing as per the above comment.

