Bug 1249847 - redhat big switch integration
Status: CLOSED DUPLICATE of bug 1249846
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
: Director
Assigned To: chris alfonso
QA Contact: yeylon@redhat.com
Depends On:
Blocks:
 
Reported: 2015-08-03 20:46 EDT by bigswitch
Modified: 2016-04-18 03:13 EDT
CC List: 4 users

Doc Type: Bug Fix
Last Closed: 2015-08-03 20:54:32 EDT
Type: Bug


Attachments: None
Description bigswitch 2015-08-03 20:46:32 EDT
Description of problem:

As part of the integration, we need to 1) remove neutron-dhcp-agent,
neutron-l3-agent, and neutron-metadata-agent from the OpenStack controller nodes and bring
them up on multiple compute nodes, and 2) change the keystone configuration and restart keystone on all controller nodes.
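
For step 2, this is roughly what is run (exact keystone configuration changes
omitted; keystone is managed by Pacemaker as openstack-keystone-clone, as shown
in the pcs status output below):

    # After editing the keystone configuration on every controller node,
    # bounce the pacemaker-managed keystone clone from any one controller.
    sudo pcs resource disable openstack-keystone-clone
    sudo pcs resource enable openstack-keystone-clone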

However, once we do this, Pacemaker is left in a bad state and all keystone
authentication fails.

The details follow.
To disable the neutron l3/dhcp/metadata agents, we use the following pcs commands:

    sudo pcs resource disable neutron-metadata-agent-clone
    sudo pcs resource disable neutron-metadata-agent
    sudo pcs resource cleanup neutron-metadata-agent-clone
    sudo pcs resource cleanup neutron-metadata-agent
    sudo pcs resource delete neutron-metadata-agent-clone
    sudo pcs resource delete neutron-metadata-agent

    sudo pcs resource disable neutron-dhcp-agent-clone
    sudo pcs resource disable neutron-dhcp-agent
    sudo pcs resource cleanup neutron-dhcp-agent-clone
    sudo pcs resource cleanup neutron-dhcp-agent
    sudo pcs resource delete neutron-dhcp-agent-clone
    sudo pcs resource delete neutron-dhcp-agent

    sudo pcs resource disable neutron-l3-agent-clone
    sudo pcs resource disable neutron-l3-agent
    sudo pcs resource cleanup neutron-l3-agent-clone
    sudo pcs resource cleanup neutron-l3-agent
    sudo pcs resource delete neutron-l3-agent-clone
    sudo pcs resource delete neutron-l3-agent
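
To then bring the agents up on the compute nodes (step 1 above), roughly the
following is run on each compute node (a sketch; it assumes the stock RHEL OSP 7
neutron agent packages and their default systemd unit names are in place):

    # Enable and start the neutron agents outside of pacemaker on the compute node.
    sudo systemctl enable neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent
    sudo systemctl start neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent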

Pacemaker then reports the following errors:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Last updated: Fri Jul 31 13:34:46 2015
Last change: Fri Jul 31 12:54:18 2015
Stack: corosync
Current DC: overcloud-controller-2 (3) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
103 Resources configured


Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-172.17.0.11	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0 
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-172.18.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-1 
 ip-192.168.2.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2 
 ip-172.17.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0 
 ip-192.168.1.90	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-1 
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-172.19.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2 
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-0 ]
     Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-alarm-notifier-clone [openstack-ceilometer-alarm-notifier]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-alarm-evaluator-clone [openstack-ceilometer-alarm-evaluator]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume	(systemd:openstack-cinder-volume):	Started overcloud-controller-0 
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Failed actions:
    openstack-heat-api_monitor_60000 on overcloud-controller-0 'not running' (7): call=163, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:52:25 2015', queued=0ms, exec=5ms
    neutron-openvswitch-agent_monitor_60000 on overcloud-controller-0 'not running' (7): call=320, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:58:23 2015', queued=0ms, exec=0ms
    neutron-server_monitor_60000 on overcloud-controller-0 'not running' (7): call=314, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:58:18 2015', queued=0ms, exec=0ms
    neutron-openvswitch-agent_monitor_60000 on overcloud-controller-2 'not running' (7): call=296, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:57:49 2015', queued=0ms, exec=0ms
    openstack-heat-api_monitor_60000 on overcloud-controller-1 'not running' (7): call=162, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:52:25 2015', queued=0ms, exec=126ms
    neutron-openvswitch-agent_monitor_60000 on overcloud-controller-1 'not running' (7): call=301, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:58:49 2015', queued=0ms, exec=0ms
    neutron-server_monitor_60000 on overcloud-controller-1 'OCF_PENDING' (196): call=289, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 12:58:05 2015', queued=0ms, exec=0ms
    httpd_monitor_60000 on overcloud-controller-1 'OCF_PENDING' (196): call=260, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 12:58:03 2015', queued=0ms, exec=0ms


PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
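
For reference, stale failed-action entries like the ones above can normally be
cleared with pcs resource cleanup, for example:

    # Clear the recorded failures so pacemaker re-probes the resource.
    sudo pcs resource cleanup openstack-heat-api-clone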

 
Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 3 Mike Burns 2015-08-03 20:54:32 EDT
Bug inadvertently opened twice

*** This bug has been marked as a duplicate of bug 1249846 ***
