Bug 1249846

Summary: [RFE] redhat big switch integration
Product: Red Hat OpenStack
Reporter: bigswitch <rhosp-bugs-internal>
Component: rhosp-director
Assignee: chris alfonso <calfonso>
Status: CLOSED DEFERRED
QA Contact: yeylon <yeylon>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 7.0 (Kilo)
CC: hbrock, mburns, rhel-osp-director-maint, srevivo
Target Milestone: ---
Keywords: FutureFeature, ZStream
Target Release: Director
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-08-28 17:40:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description bigswitch 2015-08-04 00:46:23 UTC
Description of problem:

As part of the integration, we need to 1) remove neutron-dhcp-agent,
neutron-l3-agent, and neutron-metadata-agent from the OpenStack controller nodes and bring
them up on multiple compute nodes, and 2) change the Keystone configuration and restart Keystone on all controller nodes.
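
The report does not show the commands used for step 2. As a rough sketch only (the option name and value below are placeholders, not the actual Big Switch settings), the per-controller configuration change and the Pacemaker-managed restart might look like:

    # placeholder option/value -- the real Keystone setting is not given in this report
    sudo crudini --set /etc/keystone/keystone.conf DEFAULT some_option some_value
    # restart the Pacemaker-managed Keystone clone across the controllers
    sudo pcs resource restart openstack-keystone-clone

If `pcs resource restart` is not available in the installed pcs version, a disable/enable cycle of the clone achieves the same effect.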

However, once we do this, Pacemaker ends up in a bad state and all Keystone
authentication fails.

The details follow.
To disable the neutron l3/dhcp/metadata agents, we run the following pcs commands:

    sudo pcs resource disable neutron-metadata-agent-clone
    sudo pcs resource disable neutron-metadata-agent
    sudo pcs resource cleanup neutron-metadata-agent-clone
    sudo pcs resource cleanup neutron-metadata-agent
    sudo pcs resource delete neutron-metadata-agent-clone
    sudo pcs resource delete neutron-metadata-agent

    sudo pcs resource disable neutron-dhcp-agent-clone
    sudo pcs resource disable neutron-dhcp-agent
    sudo pcs resource cleanup neutron-dhcp-agent-clone
    sudo pcs resource cleanup neutron-dhcp-agent
    sudo pcs resource delete neutron-dhcp-agent-clone
    sudo pcs resource delete neutron-dhcp-agent

    sudo pcs resource disable neutron-l3-agent-clone
    sudo pcs resource disable neutron-l3-agent
    sudo pcs resource cleanup neutron-l3-agent-clone
    sudo pcs resource cleanup neutron-l3-agent
    sudo pcs resource delete neutron-l3-agent-clone
    sudo pcs resource delete neutron-l3-agent
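
Bringing the same agents up on the compute nodes (the second half of step 1) is not shown in this report. Assuming they are run unmanaged under systemd on the compute nodes rather than under Pacemaker, it would look roughly like:

    # on each compute node that should host the agents
    sudo systemctl enable neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent
    sudo systemctl start neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent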

Pacemaker then reports the following errors:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Last updated: Fri Jul 31 13:34:46 2015
Last change: Fri Jul 31 12:54:18 2015
Stack: corosync
Current DC: overcloud-controller-2 (3) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
103 Resources configured


Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-172.17.0.11	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0 
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-172.18.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-1 
 ip-192.168.2.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2 
 ip-172.17.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0 
 ip-192.168.1.90	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-1 
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-172.19.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2 
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-0 ]
     Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-alarm-notifier-clone [openstack-ceilometer-alarm-notifier]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-alarm-evaluator-clone [openstack-ceilometer-alarm-evaluator]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume	(systemd:openstack-cinder-volume):	Started overcloud-controller-0 
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Failed actions:
    openstack-heat-api_monitor_60000 on overcloud-controller-0 'not running' (7): call=163, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:52:25 2015', queued=0ms, exec=5ms
    neutron-openvswitch-agent_monitor_60000 on overcloud-controller-0 'not running' (7): call=320, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:58:23 2015', queued=0ms, exec=0ms
    neutron-server_monitor_60000 on overcloud-controller-0 'not running' (7): call=314, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:58:18 2015', queued=0ms, exec=0ms
    neutron-openvswitch-agent_monitor_60000 on overcloud-controller-2 'not running' (7): call=296, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:57:49 2015', queued=0ms, exec=0ms
    openstack-heat-api_monitor_60000 on overcloud-controller-1 'not running' (7): call=162, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:52:25 2015', queued=0ms, exec=126ms
    neutron-openvswitch-agent_monitor_60000 on overcloud-controller-1 'not running' (7): call=301, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 04:58:49 2015', queued=0ms, exec=0ms
    neutron-server_monitor_60000 on overcloud-controller-1 'OCF_PENDING' (196): call=289, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 12:58:05 2015', queued=0ms, exec=0ms
    httpd_monitor_60000 on overcloud-controller-1 'OCF_PENDING' (196): call=260, status=complete, exit-reason='none', last-rc-change='Fri Jul 31 12:58:03 2015', queued=0ms, exec=0ms


PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
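
A note on the output above (not part of the original report): the entries under "Failed actions" are stale monitor failures left over from the deleted resources, and can usually be cleared with a cluster-wide cleanup, for example:

    # clear failure history for all resources on all nodes
    sudo pcs resource cleanup

This only clears the stale records; it does not by itself address why Keystone authentication fails after the resource deletions.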

 
Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Mike Burns 2015-08-04 00:54:32 UTC
*** Bug 1249847 has been marked as a duplicate of this bug. ***

Comment 5 chris alfonso 2015-08-28 17:40:58 UTC
This is being approached in a different manner entirely at this point. No need to track it with this bug.