Bug 1337598 - Oracle Grid fails to install due to network connection issues on RHEL 7.3
Summary: Oracle Grid fails to install due to network connection issues on RHEL 7.3
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: iputils
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jan Synacek
QA Contact: Robin Hack
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-19 14:59 UTC by Daniel Yeisley
Modified: 2020-06-11 12:52 UTC
CC List: 2 users

Fixed In Version: iputils-20160308-4.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-04 00:12:12 UTC
Target Upstream Version:
Embargoed:


Attachments
minor changes to -I argument (672 bytes, patch)
2016-05-19 20:42 UTC, Daniel Yeisley
no flags


Links
Red Hat Product Errata RHEA-2016:2185 (normal, SHIPPED_LIVE): iputils bug fix update, last updated 2016-11-03 13:18:16 UTC

Description Daniel Yeisley 2016-05-19 14:59:29 UTC
Description of problem:
I've been unable to install Oracle Grid on RHEL 7.3. Using the Oracle Cluster Verification Utility (runcluvfy.sh), I traced the failure to iputils.

Version-Release number of selected component (if applicable):
iputils-20160308-3.el7
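(The previous RHEL 7.2 package, iputils-20121221-7.el7, does not show the problem; downgrading to it restores the old -I behavior, as shown in comment 1.)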

How reproducible:


Steps to Reproduce:
1. Configure the two systems for Oracle Grid installation.  
2. As user 'oracle' run the Cluster Verification Utility.

Actual results:
RHEL-7.2 + iputils-20160308-3

[oracle@veritas4 ~]$ ./grid/runcluvfy.sh comp nodecon -n veritas4,veritas5 -verbose 

Verifying node connectivity 

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  veritas5                              passed                  
  veritas4                              passed                  

Verification of the hosts config file successful


Interface information for node "veritas5"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eno1   10.16.64.48     10.16.64.0      0.0.0.0         10.16.71.254    00:1D:09:29:72:A2 1500  
 eno2   192.168.130.11  192.168.130.0   0.0.0.0         10.16.71.254    00:1D:09:29:72:A4 1500  
 enp10s0f0 192.168.100.105 192.168.100.0   0.0.0.0         10.16.71.254    90:E2:BA:75:DD:3C 1500  


Interface information for node "veritas4"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eno1   10.16.64.166    10.16.64.0      0.0.0.0         10.16.71.254    00:22:19:99:EF:EC 1500  
 eno2   192.168.130.10  192.168.130.0   0.0.0.0         10.16.71.254    00:22:19:99:EF:EE 1500  
 enp8s0f0 192.168.100.103 192.168.100.0   0.0.0.0         10.16.71.254    90:E2:BA:75:E4:00 1500  


Check: Node connectivity of subnet "10.16.64.0"

WARNING: 
Make sure IP address "eno1 : 10.16.64.48 [10.16.64.0] " is up and is a valid IP address on node "veritas5"

WARNING: 
Make sure IP address "eno1 : 10.16.64.166 [10.16.64.0] " is up and is a valid IP address on node "veritas4"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas5[10.16.64.48]           veritas4[10.16.64.166]          no              

ERROR: 
PRVF-7616 : Node connectivity failed for subnet "10.16.64.0" between "veritas5 - eno1 : 10.16.64.48" and "veritas4 - eno1 : 10.16.64.166"
Result: Node connectivity failed for subnet "10.16.64.0"


Check: TCP connectivity of subnet "10.16.64.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas4:10.16.64.166           veritas5:10.16.64.48            passed          
Result: TCP connectivity check passed for subnet "10.16.64.0"


Check: Node connectivity of subnet "192.168.130.0"

WARNING: 
Make sure IP address "eno2 : 192.168.130.11 [192.168.130.0] " is up and is a valid IP address on node "veritas5"

WARNING: 
Make sure IP address "eno2 : 192.168.130.10 [192.168.130.0] " is up and is a valid IP address on node "veritas4"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas5[192.168.130.11]        veritas4[192.168.130.10]        no              

ERROR: 
PRVF-7616 : Node connectivity failed for subnet "192.168.130.0" between "veritas5 - eno2 : 192.168.130.11" and "veritas4 - eno2 : 192.168.130.10"
Result: Node connectivity failed for subnet "192.168.130.0"


Check: TCP connectivity of subnet "192.168.130.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas4:192.168.130.10         veritas5:192.168.130.11         passed          
Result: TCP connectivity check passed for subnet "192.168.130.0"


Check: Node connectivity of subnet "192.168.100.0"

WARNING: 
Make sure IP address "enp10s0f0 : 192.168.100.105 [192.168.100.0] " is up and is a valid IP address on node "veritas5"

WARNING: 
Make sure IP address "enp8s0f0 : 192.168.100.103 [192.168.100.0] " is up and is a valid IP address on node "veritas4"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas5[192.168.100.105]       veritas4[192.168.100.103]       no              

ERROR: 
PRVF-7616 : Node connectivity failed for subnet "192.168.100.0" between "veritas5 - enp10s0f0 : 192.168.100.105" and "veritas4 - enp8s0f0 : 192.168.100.103"
Result: Node connectivity failed for subnet "192.168.100.0"


Check: TCP connectivity of subnet "192.168.100.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas4:192.168.100.103        veritas5:192.168.100.105        passed          
Result: TCP connectivity check passed for subnet "192.168.100.0"


WARNING: 
Could not find a suitable set of interfaces for VIPs

WARNING: 
Could not find a suitable set of interfaces for the private interconnect
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.16.64.0".
Subnet mask consistency check passed for subnet "192.168.130.0".
Subnet mask consistency check passed for subnet "192.168.100.0".
Subnet mask consistency check passed.

Result: Node connectivity check failed


Verification of node connectivity was unsuccessful. 
Checks did not pass for the following node(s):
	veritas5,veritas4



Expected results:
RHEL-7.2 with iputils-20121221-7
[oracle@veritas4 ~]$ ./grid/runcluvfy.sh comp nodecon -n veritas4,veritas5 -verbose 

Verifying node connectivity 

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  veritas5                              passed                  
  veritas4                              passed                  

Verification of the hosts config file successful


Interface information for node "veritas5"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eno1   10.16.64.48     10.16.64.0      0.0.0.0         10.16.71.254    00:1D:09:29:72:A2 1500  
 eno2   192.168.130.11  192.168.130.0   0.0.0.0         10.16.71.254    00:1D:09:29:72:A4 1500  
 enp10s0f0 192.168.100.105 192.168.100.0   0.0.0.0         10.16.71.254    90:E2:BA:75:DD:3C 1500  


Interface information for node "veritas4"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eno1   10.16.64.166    10.16.64.0      0.0.0.0         10.16.71.254    00:22:19:99:EF:EC 1500  
 eno2   192.168.130.10  192.168.130.0   0.0.0.0         10.16.71.254    00:22:19:99:EF:EE 1500  
 enp8s0f0 192.168.100.103 192.168.100.0   0.0.0.0         10.16.71.254    90:E2:BA:75:E4:00 1500  


Check: Node connectivity of subnet "10.16.64.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas5[10.16.64.48]           veritas4[10.16.64.166]          yes             
Result: Node connectivity passed for subnet "10.16.64.0" with node(s) veritas5,veritas4


Check: TCP connectivity of subnet "10.16.64.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas4:10.16.64.166           veritas5:10.16.64.48            passed          
Result: TCP connectivity check passed for subnet "10.16.64.0"


Check: Node connectivity of subnet "192.168.130.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas5[192.168.130.11]        veritas4[192.168.130.10]        yes             
Result: Node connectivity passed for subnet "192.168.130.0" with node(s) veritas5,veritas4


Check: TCP connectivity of subnet "192.168.130.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas4:192.168.130.10         veritas5:192.168.130.11         passed          
Result: TCP connectivity check passed for subnet "192.168.130.0"


Check: Node connectivity of subnet "192.168.100.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas5[192.168.100.105]       veritas4[192.168.100.103]       yes             
Result: Node connectivity passed for subnet "192.168.100.0" with node(s) veritas5,veritas4


Check: TCP connectivity of subnet "192.168.100.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  veritas4:192.168.100.103        veritas5:192.168.100.105        passed          
Result: TCP connectivity check passed for subnet "192.168.100.0"


Interfaces found on subnet "10.16.64.0" that are likely candidates for VIP are:
veritas5 eno1:10.16.64.48
veritas4 eno1:10.16.64.166

Interfaces found on subnet "192.168.130.0" that are likely candidates for a private interconnect are:
veritas5 eno2:192.168.130.11
veritas4 eno2:192.168.130.10
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.16.64.0".
Subnet mask consistency check passed for subnet "192.168.130.0".
Subnet mask consistency check passed for subnet "192.168.100.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed


Verification of node connectivity was successful. 


Additional info:

Comment 1 Daniel Yeisley 2016-05-19 15:50:10 UTC
It looks like the -I <ip> functionality has changed.  

[root@veritas5 ~]# rpm -Uvh --force iputils-20121221-7.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:iputils-20121221-7.el7           ################################# [ 50%]
Cleaning up / removing...
   2:iputils-20160308-3.el7           ################################# [100%]

[root@veritas5 ~]# ping 192.168.100.103 -c 1 -w 3 -I 192.168.100.105
PING 192.168.100.103 (192.168.100.103) from 192.168.100.105 : 56(84) bytes of data.
64 bytes from 192.168.100.103: icmp_seq=1 ttl=64 time=0.079 ms

--- 192.168.100.103 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms

[root@veritas5 ~]# rpm -Uvh iputils-20160308-3.el7.x86_64.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
   1:iputils-20160308-3.el7           ################################# [ 50%]
Cleaning up / removing...
   2:iputils-20121221-7.el7           ################################# [100%]

[root@veritas5 ~]# ping 192.168.100.103 -c 1 -w 3 -I 192.168.100.105
ping: unknown iface 192.168.100.105
[root@veritas5 ~]#
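
For reference, the behavior change is in how ping interprets the argument to -I: the older ping first tried to parse it as an IPv4 address and only fell back to treating it as an interface name, while the output above shows the newer build treating "192.168.100.105" strictly as an interface ("unknown iface"). Below is a minimal C sketch of that disambiguation, assuming the usual approach of probing with inet_pton(); parse_dash_i is an invented helper name, and this is illustrative code, not the actual iputils source.

/* Sketch: decide whether a -I argument is a source address or an
 * interface name. Illustrative only; not the iputils implementation. */
#include <arpa/inet.h>
#include <stdio.h>

/* Returns 1 if 'arg' parses as an IPv4 address (use it as the source
 * address and bind() the socket to it); 0 means fall back to treating
 * it as a device name (e.g. via the SO_BINDTODEVICE socket option). */
static int parse_dash_i(const char *arg, struct in_addr *src)
{
    return inet_pton(AF_INET, arg, src) == 1;
}

int main(int argc, char **argv)
{
    const char *arg = (argc > 1) ? argv[1] : "192.168.100.105";
    struct in_addr src;

    if (parse_dash_i(arg, &src))
        printf("-I %s: treat as source address\n", arg);
    else
        printf("-I %s: treat as interface name\n", arg);
    return 0;
}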

Comment 3 Daniel Yeisley 2016-05-19 20:42:27 UTC
Created attachment 1159679 [details]
minor changes to -I argument

This patch works for me, but someone with more knowledge of this code should look at it.

[root@hp-dl380pgen8-03 iputils-s20160308]# ./ping 10.16.42.143 -I 10.16.42.143
PING 10.16.42.143 (10.16.42.143) from 10.16.42.143 : 56(84) bytes of data.
64 bytes from 10.16.42.143: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 10.16.42.143: icmp_seq=2 ttl=64 time=0.011 ms
64 bytes from 10.16.42.143: icmp_seq=3 ttl=64 time=0.010 ms
64 bytes from 10.16.42.143: icmp_seq=4 ttl=64 time=0.010 ms
^C
--- 10.16.42.143 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.010/0.012/0.019/0.005 ms
[root@hp-dl380pgen8-03 iputils-s20160308]# ./ping 10.16.42.143 -I eno1
PING 10.16.42.143 (10.16.42.143) from 10.16.42.143 eno1: 56(84) bytes of data.
64 bytes from 10.16.42.143: icmp_seq=1 ttl=64 time=0.010 ms
64 bytes from 10.16.42.143: icmp_seq=2 ttl=64 time=0.010 ms
64 bytes from 10.16.42.143: icmp_seq=3 ttl=64 time=0.010 ms
^C

[root@hp-dl380pgen8-03 iputils-s20160308]# ./ping 10.16.42.143
PING 10.16.42.143 (10.16.42.143) 56(84) bytes of data.
64 bytes from 10.16.42.143: icmp_seq=1 ttl=64 time=0.009 ms
64 bytes from 10.16.42.143: icmp_seq=2 ttl=64 time=0.010 ms
64 bytes from 10.16.42.143: icmp_seq=3 ttl=64 time=0.011 ms
^C
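
Note that the transcripts above exercise three forms: -I with a source address, -I with an interface name, and no -I at all. Any fix to the option parsing presumably has to keep all three working, which would explain why the attached patch is limited to the -I argument handling.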

Comment 12 errata-xmlrpc 2016-11-04 00:12:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2185.html

