Bug 1301404 - RabbitMQ fails to start on IPv6 overcloud deployment
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
7.0 (Kilo)
All Linux
Priority: urgent, Severity: urgent
: y3
: 7.0 (Kilo)
Assigned To: Hugh Brock
Asaf Hirshberg
: TestOnly
: 1301403
Depends On:
Blocks:
Reported: 2016-01-24 15:46 EST by Udi Shkalim
Modified: 2016-02-18 11:51 EST (History)
5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
An SELinux issue stopped RabbitMQ from starting on IPv6-based Overclouds. This fix corrects the SELinux issue and RabbitMQ now starts successfully.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-02-18 11:51:45 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Udi Shkalim 2016-01-24 15:46:22 EST
Description of problem:
The RabbitMQ resource fails to start on an overcloud with an IPv6 deployment.

BOOT FAILED
===========

Error description:
   {could_not_start,rabbit,
       {bad_return,
           {{rabbit,start,[normal,[]]},
            {'EXIT',
                {rabbit,failure_during_boot,
                    {error,
                        {timeout_waiting_for_tables,
                            [rabbit_user,rabbit_user_permission,rabbit_vhost,
                             rabbit_durable_route,rabbit_durable_exchange,
                             rabbit_runtime_parameters,
                             rabbit_durable_queue]}}}}}}}

Log files (may contain more information):
   /var/log/rabbitmq/rabbit@overcloud-controller-0.log
   /var/log/rabbitmq/rabbit@overcloud-controller-0-sasl.log

{"init terminating in do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{error,{timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}}}}}}}}}
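The crash dump above names the Mnesia tables RabbitMQ timed out waiting for. As a minimal triage sketch, the table list can be extracted from such a line with sed; the sample file name below is illustrative (the real file is /var/log/rabbitmq/rabbit@overcloud-controller-0-sasl.log), and the sample line is copied from this report:

```shell
# Write the crash line from this report into a stand-in sample file.
cat > sasl_sample.log <<'EOF'
{"init terminating in do_boot",{rabbit,failure_during_boot,{could_not_start,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot,{error,{timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}}}}}}}}}
EOF

# Pull out the comma-separated table list inside timeout_waiting_for_tables
# and print one table name per line.
sed -n 's/.*timeout_waiting_for_tables,\[\([^]]*\)\].*/\1/p' sasl_sample.log | tr ',' '\n'
```

This prints the seven tables listed in the error, which confirms the failure is the whole Mnesia schema timing out rather than a single corrupt table.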



Version-Release number of selected component (if applicable):
Overcloud Packages:
rabbitmq-server-3.3.5-15.el7ost.noarch

Undercloud packages
openstack-tripleo-heat-templates-0.8.6-112.el7ost.noarch
openstack-puppet-modules-2015.1.8-45.el7ost.noarch

How reproducible:
1/1

Steps to Reproduce:
1. Follow the guide to deploy a virt environment with IPv6 - http://openstack.etherpad.corp.redhat.com/372
2. The overcloud deploys successfully
3. Connect to one of the controllers and run pcs status
4. The rabbitmq resource fails to start
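Step 3 can be scripted as a quick check. This is a hedged sketch: the resource name rabbitmq-clone and the sample pcs output below are hand-written approximations of a typical TripleO controller, not output captured from this deployment; on a real controller you would pipe `pcs status` directly instead of using a sample file:

```shell
# Hand-written approximation of the relevant `pcs status` fragment.
cat > pcs_status_sample.txt <<'EOF'
 Clone Set: rabbitmq-clone [rabbitmq]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
EOF

# Flag the failure if the rabbitmq clone reports Stopped on the line after it.
grep -A1 'rabbitmq-clone' pcs_status_sample.txt | grep -q 'Stopped' \
  && echo "rabbitmq resource is not running on any controller"
```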

Actual results:
The rabbitmq resource fails to start.

Expected results:
The RabbitMQ cluster should start with no errors.

Additional info:
Comment 2 Mike Burns 2016-01-24 20:38:18 EST
*** Bug 1301403 has been marked as a duplicate of this bug. ***
Comment 3 Udi Shkalim 2016-01-31 05:19:19 EST
A problem with SELinux caused the rabbitmq failure.
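A sketch of how such an SELinux diagnosis is typically confirmed. The AVC line below is a hand-written illustration (not an audit record captured from this deployment); RabbitMQ's Erlang VM shows up in audit logs as comm="beam.smp":

```shell
# Hand-written illustration of an AVC denial against the Erlang VM.
cat > audit_sample.log <<'EOF'
type=AVC msg=audit(1454225959.000:101): avc:  denied  { name_bind } for  pid=24287 comm="beam.smp" scontext=system_u:system_r:rabbitmq_t:s0 tclass=tcp_socket
EOF

# Scan for denials; on a live RHEL 7 controller the equivalent check is:
#   getenforce
#   ausearch -m avc -ts recent | grep beam.smp
if grep -q 'avc:  denied' audit_sample.log; then
    echo "SELinux denial recorded for beam.smp (RabbitMQ's Erlang VM)"
fi
```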
Comment 4 Asaf Hirshberg 2016-01-31 05:27:26 EST
Verified on 2016-01-21-1.

[root@overcloud-controller-0 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@overcloud-controller-0' ...
[{nodes,[{disc,['rabbit@overcloud-controller-0',
                'rabbit@overcloud-controller-1',
                'rabbit@overcloud-controller-2']}]},
 {running_nodes,['rabbit@overcloud-controller-1',
                 'rabbit@overcloud-controller-2',
                 'rabbit@overcloud-controller-0']},
 {cluster_name,<<"rabbit@overcloud-controller-0">>},
 {partitions,[]}]
...done.
[root@overcloud-controller-0 ~]# rabbitmqctl status
Status of node 'rabbit@overcloud-controller-0' ...
[{pid,24287},
 {running_applications,[{rabbit,"RabbitMQ","3.3.5"},
                        {mnesia,"MNESIA  CXC 138 12","4.11"},
                        {os_mon,"CPO  CXC 138 46","2.2.14"},
                        {xmerl,"XML parser","1.3.6"},
                        {sasl,"SASL  CXC 138 11","2.3.4"},
                        {stdlib,"ERTS  CXC 138 10","1.19.4"},
                        {kernel,"ERTS  CXC 138 10","2.16.4"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:12:12] [async-threads:30] [hipe] [kernel-poll:true]\n"},
 {memory,[{total,390255952},
          {connection_procs,14395176},
          {queue_procs,10055240},
          {plugins,0},
          {other_proc,14292096},
          {mnesia,1876952},
          {mgmt_db,0},
          {msg_index,313488},
          {other_ets,1314888},
          {binary,320468112},
          {code,16693830},
          {atom,2255161},
          {other_system,8591009}]},
 {alarms,[]},
 {listeners,[{clustering,35672,"::"},{amqp,5672,"2620:52:0:23AE::13"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,13423247360},
 {disk_free_limit,50000000},
 {disk_free,469386760192},
 {file_descriptors,[{total_limit,3996},
                    {total_used,286},
                    {sockets_limit,3594},
                    {sockets_used,284}]},
 {processes,[{limit,1048576},{used,4383}]},
 {run_queue,0},
 {uptime,422323}]
...done.
Comment 7 errata-xmlrpc 2016-02-18 11:51:45 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0264.html
