Bug 1301046

Summary: IPv4 VIP resources get created with 64 cidr_netmask in mixed IPv4/IPv6 environment
Product: Red Hat OpenStack
Reporter: Giulio Fidente <gfidente>
Component: openstack-puppet-modules
Assignee: Giulio Fidente <gfidente>
Status: CLOSED ERRATA
QA Contact: Omri Hochman <ohochman>
Severity: urgent
Priority: urgent
Version: 7.0 (Kilo)
CC: dnavale, gfidente, hbrock, lbezdick, mburns, mcornea, rhel-osp-director-maint, yeylon
Target Milestone: z4
Keywords: ZStream
Target Release: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-puppet-modules-2015.1.8-44.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, in mixed environments where some networks use IPv4 addresses and others use IPv6 addresses, the IPv6 CIDR prefix length was incorrectly applied to the IPv4 virtual IP addresses as well. As a result, overcloud deployment failed because Pacemaker refused to start the IPv4 virtual IP addresses. With this update, each virtual IP is identified as IPv4 or IPv6 and the CIDR prefix length is adapted accordingly, so every virtual IP address is configured with a prefix length appropriate to its address family (a shell sketch of this check follows the metadata fields below).
Clone Of: 1301015
Last Closed: 2016-02-18 16:44:37 UTC
Type: Bug
Bug Blocks: 1301015    
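
The fix itself lives in the openstack-puppet-modules Puppet manifests; as a non-authoritative illustration of the logic the Doc Text describes, the following shell sketch picks the prefix length from the VIP's address family (the VIP value and the pcs create call are only examples, not the actual deployment code):

# Choose a prefix length that matches the VIP's address family,
# then create the Pacemaker IPaddr2 resource with it.
vip="172.16.3.4"            # example IPv4 VIP taken from this report
if [[ "$vip" == *:* ]]; then
    netmask=64              # IPv6 VIP: keep the /64 prefix
else
    netmask=32              # IPv4 VIP: a single host address, /32
fi
pcs resource create "ip-${vip}" ocf:heartbeat:IPaddr2 \
    ip="$vip" cidr_netmask="$netmask" op monitor interval=10s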

Description Giulio Fidente 2016-01-22 12:35:05 UTC
+++ This bug was initially created as a clone of Bug #1301015 +++

Description of problem:
IPv4 public VIP resources get created with cidr_netmask=64 in a mixed IPv4/IPv6 environment

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-0.8.6-110.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy a mixed IPv4/IPv6 environment (storage and storage management networks on IPv4)

Actual results:
The IPv4 VIP resources are unable to start:
[root@overcloud-controller-0 ~]# pcs status | grep Stopped
 ip-172.16.3.4	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.16.1.4	(ocf::heartbeat:IPaddr2):	Stopped


Expected results:
The IPv4 VIP resources start.

Additional info:
overcloud-controller-0-ip-172.16.3.4_start_0:6 [ ocf-exit-reason:Invalid netmask specification [64].\n ]

[root@overcloud-controller-0 ~]# pcs resource show  ip-172.16.3.4
 Resource: ip-172.16.3.4 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=172.16.3.4 cidr_netmask=64 
  Operations: start interval=0s timeout=20s (ip-172.16.3.4-start-interval-0s)
              stop interval=0s timeout=20s (ip-172.16.3.4-stop-interval-0s)
              monitor interval=10s timeout=20s (ip-172.16.3.4-monitor-interval-10s)
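
One possible manual recovery for the stopped IPv4 VIPs, sketched here assuming the resource names shown above (the permanent fix is the openstack-puppet-modules update listed in Fixed In Version):

pcs resource update ip-172.16.3.4 cidr_netmask=32
pcs resource update ip-172.16.1.4 cidr_netmask=32
pcs resource cleanup ip-172.16.3.4
pcs resource cleanup ip-172.16.1.4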

Comment 2 Marius Cornea 2016-01-25 11:39:49 UTC
[root@overcloud-controller-0 ~]# rpm -qa | grep puppet-modules
openstack-puppet-modules-2015.1.8-45.el7ost.noarch
[root@overcloud-controller-0 ~]# pcs resource show ip-2001.db8.fd00.1000..10
 Resource: ip-2001.db8.fd00.1000..10 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=2001:db8:fd00:1000::10 cidr_netmask=64 
  Operations: start interval=0s timeout=20s (ip-2001.db8.fd00.1000..10-start-interval-0s)
              stop interval=0s timeout=20s (ip-2001.db8.fd00.1000..10-stop-interval-0s)
              monitor interval=10s timeout=20s (ip-2001.db8.fd00.1000..10-monitor-interval-10s)
[root@overcloud-controller-0 ~]# pcs resource show ip-172.16.3.4
 Resource: ip-172.16.3.4 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=172.16.3.4 cidr_netmask=32 
  Operations: start interval=0s timeout=20s (ip-172.16.3.4-start-interval-0s)
              stop interval=0s timeout=20s (ip-172.16.3.4-stop-interval-0s)
              monitor interval=10s timeout=20s (ip-172.16.3.4-monitor-interval-10s)
[root@overcloud-controller-0 ~]#
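
To check the prefix length of every VIP in one pass (assuming the pcs 0.9 syntax used elsewhere in this report):

pcs resource show --full | grep -E 'Resource:|cidr_netmask'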

Comment 5 errata-xmlrpc 2016-02-18 16:44:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0265.html