Bug 1365873 - RabbitMQ resources fail to start in HA IPv6 deployment
Summary: RabbitMQ resources fail to start in HA IPv6 deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rabbitmq-server
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: beta
Target Release: 10.0 (Newton)
Assignee: Peter Lemenkov
QA Contact: Asaf Hirshberg
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-10 11:49 UTC by Marius Cornea
Modified: 2016-12-14 15:50 UTC
CC: 9 users

Fixed In Version: rabbitmq-server-3.6.3-5.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-14 15:50:51 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:2948 normal SHIPPED_LIVE Red Hat OpenStack Platform 10 enhancement update 2016-12-14 19:55:27 UTC

Internal Links: 1549089

Description Marius Cornea 2016-08-10 11:49:18 UTC
Description of problem:
RabbitMQ resources fail to start in an HA IPv6 deployment, so the overcloud deployment fails. The issue appears to be the same as the one reported in BZ#1347802, which was fixed by rabbitmq-server-3.6.3-5.el7ost.noarch.

The images provided by rhosp-director-images-10.0-20160803.1.el7ost.noarch contain rabbitmq-server-3.6.2-3.el7ost.noarch

Version-Release number of selected component (if applicable):
rhosp-director-images-10.0-20160803.1.el7ost.noarch
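The fix landed in rabbitmq-server-3.6.3-5.el7ost, while the 20160803.1 image still carried 3.6.2-3.el7ost. As a rough illustration (not the proper RPM comparison tool — `rpmdev-vercmp` handles epoch and release semantics correctly), a minimal shell sketch that checks whether an installed NVR predates the fixed one using `sort -V`:

```shell
# Fixed and installed versions, taken from this report.
fixed="3.6.3-5.el7ost"
have="3.6.2-3.el7ost"   # version shipped in the 20160803.1 image
# sort -V orders version strings; if the installed version sorts first
# and differs from the fixed one, the image predates the fix.
oldest=$(printf '%s\n%s\n' "$fixed" "$have" | sort -V | head -n1)
if [ "$oldest" = "$have" ] && [ "$have" != "$fixed" ]; then
  echo "needs update"
fi
```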

Comment 4 Asaf Hirshberg 2016-09-26 07:34:52 UTC
Verified. Deployment succeeded using OpenStack-10.0-RHEL-7 Puddle 2016-09-22.2.

Deploy command:
openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation-v6.yaml \
-e /home/stack/network-environment-v6.yaml --control-scale 3 --compute-scale 2 --neutron-network-type vlan \
--neutron-tunnel-types vlan --neutron-network-vlan-ranges datacentre:204:215 --neutron-disable-tunneling \
--timeout 180 --ntp-server clock.redhat.com


[root@overcloud-controller-0 ~]# rabbitmqctl status
Status of node 'rabbit@overcloud-controller-0' ...
[{pid,8447},
 {running_applications,[{rabbit,"RabbitMQ","3.6.3"},
                        {mnesia,"MNESIA  CXC 138 12","4.13.4"},
                        {rabbit_common,[],"3.6.3"},
                        {os_mon,"CPO  CXC 138 46","2.4"},
                        {ranch,"Socket acceptor pool for TCP protocols.",
                               "1.2.1"},
                        {xmerl,"XML parser","1.3.10"},
                        {sasl,"SASL  CXC 138 11","2.7"},
                        {stdlib,"ERTS  CXC 138 10","2.8"},
                        {kernel,"ERTS  CXC 138 10","4.2"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang/OTP 18 [erts-7.3.1.2] [source] [64-bit] [smp:12:12] [async-threads:30] [hipe] [kernel-poll:true]\n"},
 {memory,[{total,259254144},
          {connection_readers,1834696},
          {connection_writers,438040},
          {connection_channels,2090056},
          {connection_other,4742888},
          {queue_procs,8156312},
          {queue_slave_procs,4457488},
          {plugins,0},
          {other_proc,21469072},
          {mnesia,1937680},
          {mgmt_db,0},
          {msg_index,313744},
          {other_ets,1431264},
          {binary,174451464},
          {code,19686999},
          {atom,752537},
          {other_system,17491904}]},
 {alarms,[]},
 {listeners,[{clustering,35672,"::"},{amqp,5672,"2620:52:0:23AE::13"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,13423021260},
 {disk_free_limit,50000000},
 {disk_free,418020323328},
 {file_descriptors,[{total_limit,65436},
                    {total_used,160},
                    {sockets_limit,58890},
                    {sockets_used,158}]},
 {processes,[{limit,1048576},{used,3401}]},
 {run_queue,0},
 {uptime,4225},
 {kernel,{net_ticktime,60}}]
[root@overcloud-controller-0 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@overcloud-controller-0' ...
[{nodes,[{disc,['rabbit@overcloud-controller-0',
                'rabbit@overcloud-controller-1',
                'rabbit@overcloud-controller-2']}]},
 {running_nodes,['rabbit@overcloud-controller-1',
                 'rabbit@overcloud-controller-2',
                 'rabbit@overcloud-controller-0']},
 {cluster_name,<<"rabbit@overcloud-controller-2">>},
 {partitions,[]},
 {alarms,[{'rabbit@overcloud-controller-1',[nodedown]},
          {'rabbit@overcloud-controller-2',[nodedown]},
          {'rabbit@overcloud-controller-0',[]}]}]
[root@overcloud-controller-0 ~]# rpm -qa|grep rabbitmq
rabbitmq-server-3.6.3-5.el7ost.noarch
puppet-rabbitmq-5.5.0-0.20160902221816.837d556.el7ost.noarch
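
The `{listeners,...}` tuple in the status output above is what confirms the IPv6 bind. As an illustrative sketch, a small shell snippet that extracts the AMQP bind address from such a line (the sample string is copied verbatim from the output above; a `:` in the extracted address indicates IPv6):

```shell
# Sample listeners line from `rabbitmqctl status` (copied from this report).
status_line='{listeners,[{clustering,35672,"::"},{amqp,5672,"2620:52:0:23AE::13"}]}'
# Pull out the quoted address of the amqp listener on port 5672.
amqp_addr=$(printf '%s' "$status_line" | sed -n 's/.*{amqp,5672,"\([^"]*\)".*/\1/p')
echo "amqp listener bound to: $amqp_addr"
```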

Comment 7 errata-xmlrpc 2016-12-14 15:50:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html

