Bug 1841076 - vdsm should use the switch '--inet6' for querying gluster volume info with '--remote-host'
Summary: vdsm should use the switch '--inet6' for querying gluster volume info with '-...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Gluster
Version: 4.40.16
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ovirt-4.4.1
Target Release: 4.40.21
Assignee: Kaustav Majumder
QA Contact: SATHEESARAN
URL:
Whiteboard:
Duplicates: 1847092
Depends On:
Blocks: 1840971
 
Reported: 2020-05-28 10:06 UTC by SATHEESARAN
Modified: 2020-08-05 06:25 UTC (History)
CC List: 6 users

Fixed In Version: vdsm-4.40.22-1.el8ev.x86_64
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1840971
Environment:
Last Closed: 2020-08-05 06:25:05 UTC
oVirt Team: Gluster
Embargoed:
sasundar: ovirt-4.4?
sasundar: blocker?
sasundar: planning_ack?
sbonazzo: devel_ack+
sasundar: testing_ack+


Attachments


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 109360 0 master MERGED gluster: Added check and change for ipv6 hostnames in gluster vol list 2020-07-12 14:03:03 UTC
oVirt gerrit 110084 0 master MERGED gluster: Modified check for ipv4 fqdn instead of ipv6 2020-07-12 14:03:03 UTC

Description SATHEESARAN 2020-05-28 10:06:26 UTC
Description of problem:
------------------------
When attempting an RHHI-V deployment with static IPv6, the Gluster deployment succeeds but the hosted-engine deployment fails

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
vdsm-gluster-4.40.16-1.el8ev.x86_64

How reproducible:
-----------------
Always

Steps to Reproduce:
------------------
1. Start RHHI-V deployment with static IPV6

Actual results:
---------------
HE deployment fails while creating the target storage domain

Expected results:
-----------------
HE deployment should be successful

Additional info:

--- Additional comment from SATHEESARAN on 2020-05-28 03:11:14 UTC ---

Error message in supervdsm.log

MainProcess|jsonrpc/6::ERROR::2020-05-28 02:38:55,667::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py", line 111, in _execGluster
    return commands.run(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/common/commands.py", line 101, in run
    raise cmdutils.Error(args, p.returncode, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/sbin/gluster', '--mode=script', 'volume', 'info', '--remote-host=host1-storage.lab.eng.blr.redhat.com',
 'engine', '--xml'] failed with rc=1 out=b'Connection failed. Please check if gluster daemon is operational.\n' err=b''

I think this is because gluster does not support IPv6 hostnames for this command.

[root@ ]# gluster --remote-host=host1-storage.lab.eng.blr.redhat.com volume info engine
Connection failed. Please check if gluster daemon is operational.


[root@ ]# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 5f2bcbab-3ef8-4cf6-be8e-f8cafe345c92
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1-storage.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine
Brick2: host2-storage.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine
Brick3: host3-storage.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36

So, as observed, '--remote-host' does not work with IPv6 hostnames.

--- Additional comment from SATHEESARAN on 2020-05-28 05:02:54 UTC ---

This happens because 'gluster --remote-host' does not try to resolve the hostname in /etc/hosts
for IPv6, but it does for IPv4.


[root@dhcp35-174 ~]# gluster --remote-host=2620:52:0:4622:5054:ff:fe28:6b23 volume list  <-- works for IPV6
testrep

[root@dhcp35-174 ~]# grep 2620:52:0:4622:5054:ff:fe28:6b23 /etc/hosts
2620:52:0:4622:5054:ff:fe28:6b23 myhost.lab.eng.blr.redhat.com myhost   <----- /etc/hosts entry for myhost.lab.eng.blr.redhat.com

[root@dhcp35-174 ~]# gluster --remote-host=myhost.lab.eng.blr.redhat.com volume list  <-- fails for IPV6 hostnames
Connection failed. Please check if gluster daemon is operational.


Works for all cases of IPV4

[root@dhcp35-174 ~]# gluster --remote-host=10.70.35.174 volume list
testrep
[root@dhcp35-174 ~]# echo 10.70.35.174 newhost.lab.eng.blr.redhat.com >> /etc/hosts
[root@dhcp35-174 ~]# gluster --remote-host=newhost.lab.eng.blr.redhat.com volume list
testrep

--- Additional comment from SATHEESARAN on 2020-05-28 09:51:45 UTC ---

Following is the comment from Sanju:
-------------------------------------
I see that the command fails if we specify a hostname instead of an IPv6 address:
[root@newhost ~]# gluster --remote-host=myhost.lab.eng.blr.redhat.com volume list
Connection failed. Please check if gluster daemon is operational.
[root@newhost ~]# 

When we add the --inet6 option to the CLI, it works:
[root@newhost ~]# gluster --remote-host=myhost.lab.eng.blr.redhat.com --inet6 volume list
testrep
[root@newhost ~]#

I believe this option was added to indicate whether the given hostname corresponds to an IPv4 or an IPv6 address.

@Sas, Do you think any changes are needed?

Thanks,
Sanju


-------------------------------

So the vdsm-gluster code should use '--inet6' when the host uses IPv6 hostnames.
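
For illustration, a minimal sketch of the kind of change this implies, building the same command as in the traceback above and appending '--inet6' when the remote host resolves to IPv6. The helper names and the use of socket.getaddrinfo are assumptions for this sketch, not the actual merged vdsm patch:

import socket


def _is_ipv6_host(host):
    # Hypothetical helper: True if 'host' resolves only to IPv6 addresses.
    try:
        addrs = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    return all(info[0] == socket.AF_INET6 for info in addrs)


def _gluster_vol_info_cmd(volume, remote_host=None):
    # Hypothetical command builder mirroring the failing command from the log.
    cmd = ["/usr/sbin/gluster", "--mode=script", "volume", "info"]
    if remote_host:
        cmd.append("--remote-host=" + remote_host)
        if _is_ipv6_host(remote_host):
            # glusterd accepts IPv6 hostnames only when '--inet6' is passed.
            cmd.append("--inet6")
    cmd += [volume, "--xml"]
    return cmd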

Comment 1 Sandro Bonazzola 2020-05-28 12:43:24 UTC
4.4.0 has been released, moving to 4.4.1

Comment 2 Gobinda Das 2020-05-28 13:19:00 UTC
@Kaustav, can you please work on this?

Comment 3 SATHEESARAN 2020-05-29 06:47:54 UTC
For the initial testing, the hostnames were resolved locally. I have now tested with DNS hostnames,
and the deployment still fails at the same step.

Comment 4 Kaustav Majumder 2020-06-01 04:55:50 UTC
Added a patch, please review.

Comment 5 Sunil Kumar Acharya 2020-06-23 11:32:21 UTC
*** Bug 1847092 has been marked as a duplicate of this bug. ***

Comment 6 SATHEESARAN 2020-07-02 16:25:30 UTC
Tested with vdsm-4.40.21-1.el8ev.x86_64
On an IPv4-only setup, this check misfires and adds '--inet6' for IPv4 hostnames as well, which leads to a failure to mount gluster volumes.
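
A minimal sketch of the kind of refinement the follow-up patch title suggests ("check for ipv4 fqdn instead of ipv6"): pass '--inet6' only when the host has no IPv4 resolution, so IPv4-only setups keep the default behaviour. The function name and exact logic here are assumptions, not the merged code:

import socket


def _use_inet6(host):
    # Assumed logic: only add '--inet6' when 'host' does not resolve to IPv4.
    try:
        socket.getaddrinfo(host, None, socket.AF_INET)
        return False   # host has an IPv4 address; keep the default behaviour
    except socket.gaierror:
        return True    # no IPv4 resolution; assume IPv6 and add '--inet6'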

Comment 7 SATHEESARAN 2020-07-09 07:05:22 UTC
The patch is merged and the vdsm build is available (vdsm-4.40.22-1.el8ev.x86_64).

Comment 8 SATHEESARAN 2020-07-12 14:04:11 UTC
Tested with vdsm-4.40.22-1.el8ev.x86_64

When using an IPv6 FQDN, the additional '--inet6' option is passed to the gluster command when
querying volume information.

Comment 9 Sandro Bonazzola 2020-08-05 06:25:05 UTC
This bugzilla is included in the oVirt 4.4.1 release, published on July 8th 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

