Bug 1347181 - Upgrade failed due to different oadm path in mixed installation
Summary: Upgrade failed due to different oadm path in mixed installation
Keywords:
Status: CLOSED DUPLICATE of bug 1364160
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cluster Version Operator
Version: 3.2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Devan Goodwin
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-06-16 09:12 UTC by Anping Li
Modified: 2016-08-16 17:36 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-16 17:36:03 UTC
Target Upstream Version:
Embargoed:



Description Anping Li 2016-06-16 09:12:01 UTC
Description of problem:
For an RPM installation the oadm path is /usr/bin/oadm; for a containerized installation it is /usr/local/bin/oadm; on Atomic Host it is /var/usrlocal/bin/oadm. The upgrade failed because /usr/local/bin/oadm was used on an RPM-installed master.
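
A minimal Ansible sketch of the idea (an illustration under assumptions, not the shipped fix): when the evacuation task is delegated to a master, derive the oadm path from the delegate host's install type instead of hard-coding it. The `containerized` variable is the one set per host in the inventory below; the task layout and names are hypothetical.

# Evacuate the node via the first master, using that master's oadm path.
- name: Prepare for Node evacuation
  command: >
    {{ '/usr/local/bin/oadm'
       if hostvars[groups['masters'][0]].containerized | default(false) | bool
       else '/usr/bin/oadm' }}
    manage-node {{ inventory_hostname }} --schedulable=false
  delegate_to: "{{ groups['masters'][0] }}"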



Version-Release number of selected component (if applicable):
atomic-openshift-utils-3.2.3-1.git.0.3ba88fa.el7.noarch

How reproducible:
always


Steps to Reproduce:
1) Install OpenShift with a mixed RPM/containerized inventory:
   
[OSEv3:children]
masters
nodes
etcd
lb
nfs

[OSEv3:vars]
ansible_ssh_user=root
openshift_use_openshift_sdn=true
deployment_type=openshift-enterprise
openshift_use_dnsmasq=False
#osm_use_cockpit=false
openshift_master_default_subdomain=host2.example.com
openshift_docker_additional_registries=virt-openshift-05.lab.eng.nay.redhat.com:5000
openshift_docker_insecure_registries=virt-openshift-05.lab.eng.nay.redhat.com:5000
oreg_url=virt-openshift-05.lab.eng.nay.redhat.com:5000/openshift3/ose-${component}:${version}
openshift_node_kubelet_args={'minimum-container-ttl-duration': ["10s"], 'maximum-dead-containers-per-container': ["1"], 'maximum-dead-containers': ["20"], 'image-gc-high-threshold': ["20"], 'image-gc-low-threshold': ["20"]}
openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]

openshift_master_cluster_method=native
openshift_master_cluster_hostname=ha1master.example.com
openshift_master_cluster_public_hostname=ha1master.example.com

openshift_hosted_router_selector='region=route'
openshift_hosted_router_replicas=1
openshift_hosted_router_certificate={"certfile": "/root/ha1/config/router.crt", "keyfile": "/root/ha1/config/router.key"}



#openshift_install_examples=true
#use_cluster_metrics=true

[masters]
ha1master1.example.com  openshift_hostname=ha1master1.example.com openshift_public_hostname=ha1master1.example.com
ha1master2.example.com  openshift_hostname=ha1master2.example.com openshift_public_hostname=ha1master2.example.com
ha1master3.example.com  openshift_hostname=ha1master3.example.com openshift_public_hostname=ha1master3.example.com

[etcd]
ha1master1.example.com  openshift_hostname=ha1master1.example.com openshift_public_hostname=ha1master1.example.com
ha1master2.example.com  openshift_hostname=ha1master2.example.com openshift_public_hostname=ha1master2.example.com
ha1master3.example.com  openshift_hostname=ha1master3.example.com openshift_public_hostname=ha1master3.example.com

[nodes]
ha1master1.example.com  openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_hostname=ha1master1.example.com openshift_public_hostname=ha1master1.example.com openshift_schedulable=true
ha1master2.example.com  openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_hostname=ha1master2.example.com openshift_public_hostname=ha1master2.example.com openshift_schedulable=true containerized=true
ha1master3.example.com  openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_hostname=ha1master3.example.com openshift_public_hostname=ha1master3.example.com openshift_schedulable=true containerized=true
ha1node1.example.com  openshift_node_labels="{'region': 'route', 'zone': 'east'}" openshift_hostname=ha1node1.example.com openshift_public_hostname=ha1node1.example.com
ha1node2.example.com  openshift_node_labels="{'region': 'infra', 'zone': 'west'}" openshift_hostname=ha1node2.example.com openshift_public_hostname=ha1node2.example.com containerized=true

[lb]
ha1master.example.com

[nfs]
ha1master1.example.com
                                            


2) Upgrade to OSE 3.2.1:
   ansible-playbook -i config/mixha /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_1_to_v3_2/upgrade.yml -vvvv|tee upgrade.logs
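
A quick pre-flight check (a hedged sketch, not part of the original report) that would surface the path mismatch before upgrading; preflight.yml is a hypothetical file name:

# preflight.yml - report where oadm actually lives on each master
- hosts: masters
  tasks:
    - name: Locate oadm binaries on this host
      shell: ls -l /usr/bin/oadm /usr/local/bin/oadm 2>/dev/null || true
      register: oadm_ls
      changed_when: false
    - debug:
        msg: "{{ oadm_ls.stdout_lines }}"

Run it with: ansible-playbook -i config/mixha preflight.yml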


Actual Result:

TASK: [Prepare for Node evacuation] ******************************************* 
<ha1master1.example.com> ESTABLISH CONNECTION FOR USER: root
<ha1master1.example.com> REMOTE_MODULE command /usr/local/bin/oadm manage-node ha1master2.example.com --schedulable=false
<ha1master1.example.com> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 ha1master1.example.com /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1466061769.73-257138486861584 && echo $HOME/.ansible/tmp/ansible-tmp-1466061769.73-257138486861584'
<ha1master1.example.com> PUT /tmp/tmpL3vfRk TO /root/.ansible/tmp/ansible-tmp-1466061769.73-257138486861584/command
<ha1master1.example.com> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 ha1master1.example.com /bin/sh -c 'LANG=C LC_CTYPE=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1466061769.73-257138486861584/command; rm -rf /root/.ansible/tmp/ansible-tmp-1466061769.73-257138486861584/ >/dev/null 2>&1'
failed: [ha1master2.example.com -> ha1master1.example.com] => {"cmd": "/usr/local/bin/oadm manage-node ha1master2.example.com --schedulable=false", "failed": true, "rc": 2}
msg: [Errno 2] No such file or directory

FATAL: all hosts have already failed -- aborting

Expected Result:
The upgrade should invoke oadm at the path matching each host's installation type, and node evacuation should succeed.
Comment 1 Devan Goodwin 2016-08-16 17:36:03 UTC

*** This bug has been marked as a duplicate of bug 1364160 ***

