Bug 1651720

Summary: kubevirt-apb deprovision gets stuck on kubevirt-web-ui removal

Product: Container Native Virtualization (CNV)
Component: Installation
Version: 1.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Status: CLOSED WONTFIX
Reporter: Lukas Bednar <lbednar>
Assignee: Ohad Levy <ohadlevy>
QA Contact: Irina Gulina <igulina>
CC: cnv-qe-bugs, mlibra, ncredi, rhallise
Target Milestone: ---
Target Release: ---
Type: Bug
Regression: ---
Last Closed: 2018-12-18 13:08:28 UTC

Description Lukas Bednar 2018-11-20 16:08:52 UTC
Description of problem:

When deprovisioning kubevirt-apb, the play gets stuck on the kubevirt-web-ui removal task:

TASK [kubevirt_web_ui : include_tasks] *****************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt_web_ui/tasks/deprovision.yml for localhost

TASK [kubevirt_web_ui : Remove kubevirt-web-ui project] ************************
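
The play hangs at this task indefinitely. A plausible explanation (an assumption; the actual roles/kubevirt_web_ui/tasks/deprovision.yml may differ) is that the task shells out to oc, and with oc v3.11 the delete verb waits for finalizers by default, so the command never returns while the namespace is stuck in Terminating:

    # Hypothetical sketch only -- not the actual role source.
    # `oc delete project` blocks until the namespace is fully removed,
    # so this task hangs for as long as the namespace stays in Terminating.
    - name: Remove kubevirt-web-ui project
      command: oc delete project kubevirt-web-ui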


Version-Release number of selected component (if applicable):
kubevirt-apb-v3.11-14

How reproducible: 100%


Steps to Reproduce:
1. Deploy kubevirt using APB
2. Deprovision kubevirt
3. Observe the logs of the deprovisioning container (see the sketch below)
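
A minimal way to locate and follow the deprovision bundle logs (pod and namespace names are environment-specific; the transcript from this run appears under Additional info):

    oc get pods --all-namespaces | grep bundle-   # locate the APB deprovision bundle pod
    oc logs -f -n <bundle-namespace> <bundle-pod>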

Actual results: deprovisioning hangs on the web console removal


Expected results: kubevirt is deprovisioned successfully


Additional info:

[root@cnv-executor-ysegev-master1 ~]# oc logs -fn brew2-virtualization-depr-w9vht bundle-53f232b3-626d-41e5-8167-5aecbc1e3f34
DEPRECATED: APB playbooks should be stored at /opt/apb/project

PLAY [Deprovision KubeVirt] ****************************************************

TASK [ansible.kubernetes-modules : Install latest openshift client] ************
skipping: [localhost]

TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
skipping: [localhost]

PLAY [all] *********************************************************************

TASK [Identify cluster] ********************************************************
changed: [localhost]

TASK [Set cluster variable] ****************************************************
 [WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: {{ result.rc }} == 0

ok: [localhost]

TASK [Login As Super User] *****************************************************
changed: [localhost]

PLAY [masters[0]] **************************************************************

TASK [network-multus : include_tasks] ******************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/network-multus/tasks/deprovision.yml for localhost

TASK [network-multus : openshift cni config] ***********************************
ok: [localhost]

TASK [network-multus : kubernetes cni config] **********************************
skipping: [localhost]

TASK [network-multus : Render multus deployment yaml] **************************
changed: [localhost]

TASK [network-multus : Delete multus Resources] ********************************
changed: [localhost]

TASK [network-multus : Render cni plugins deployment yaml] *********************
changed: [localhost]

TASK [network-multus : Delete cni plugins Resources] ***************************
changed: [localhost]

TASK [network-multus : Render OVS plugin deployment yaml] **********************
changed: [localhost]

TASK [network-multus : Delete OVS plugin Resources] ****************************
changed: [localhost]

TASK [network-multus : Render ovs-vsctl deployment yaml] ***********************
changed: [localhost]

TASK [network-multus : Delete ovs-vsctl Resources] *****************************
changed: [localhost]

TASK [skydive : include_tasks] *************************************************
skipping: [localhost]

PLAY [masters[0]] **************************************************************

TASK [kubevirt : include_tasks] ************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/tasks/deprovision.yml for localhost

TASK [kubevirt : Check that demo-content.yaml still exists in /tmp] ************
ok: [localhost]

TASK [kubevirt : Check for demo-content.yaml template in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost]

TASK [kubevirt : Download Demo Content] ****************************************
skipping: [localhost]

TASK [kubevirt : Copy BYO Demo Content to /tmp] ********************************
changed: [localhost]

TASK [kubevirt : Delete Demo Content] ******************************************
changed: [localhost]

TASK [kubevirt : Check that kubevirt.yaml still exists in /tmp] ****************
ok: [localhost]

TASK [kubevirt : Check for kubevirt.yml.j2 template in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost]

TASK [kubevirt : Download KubeVirt Template] ***********************************
skipping: [localhost]

TASK [kubevirt : Render KubeVirt template] *************************************
changed: [localhost]

TASK [kubevirt : Delete KubeVirt Resources] ************************************
changed: [localhost]

TASK [kubevirt : Delete Privileged Policy] *************************************
changed: [localhost] => (item=kubevirt-privileged)
changed: [localhost] => (item=kubevirt-controller)
changed: [localhost] => (item=kubevirt-infra)
changed: [localhost] => (item=kubevirt-apiserver)

TASK [kubevirt : Delete Hostmount-anyuid Policy] *******************************
changed: [localhost]

PLAY [masters[0]] **************************************************************

TASK [cdi : include_tasks] *****************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/cdi/tasks/deprovision.yml for localhost

TASK [cdi : Render kube-system ResourceQuota deprovision yaml] *****************
changed: [localhost]

TASK [cdi : Delete kube-system ResourceQuota] **********************************
changed: [localhost]

TASK [cdi : Check that cdi-provision.yml still exists in /tmp] *****************
ok: [localhost]

TASK [cdi : Check for cdi-controller.yml.j2 template in /etc/ansible/roles/kubevirt-ansible/roles/cdi/templates] ***
ok: [localhost]

TASK [cdi : Download CDI Template] *********************************************
skipping: [localhost]

TASK [cdi : Render CDI deprovision yaml] ***************************************
changed: [localhost]

TASK [cdi : Delete CDI Resources] **********************************************
changed: [localhost]

PLAY [Kubevirt Web UI Install Checkpoint Start] ********************************

TASK [Set Console install 'In Progress'] ***************************************
ok: [localhost]

PLAY [Kubevirt Web UI] *********************************************************

TASK [kubevirt_web_ui : include_tasks] *****************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt_web_ui/tasks/required_params.yml for localhost

TASK [kubevirt_web_ui : set_fact] **********************************************
skipping: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : Verify required kubevirt_web_ui_image_name variable is set] ***
skipping: [localhost]

TASK [kubevirt_web_ui : Discover public_master_hostname from openshift console deployment] ***
changed: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : Discover openshift_master_default_subdomain from openshift console deployment] ***
changed: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : set_fact] **********************************************
ok: [localhost]

TASK [kubevirt_web_ui : include_tasks] *****************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt_web_ui/tasks/deprovision.yml for localhost

TASK [kubevirt_web_ui : Remove kubevirt-web-ui project] ************************


[root@cnv-executor-ysegev-master1 ~]# oc get namespaces
NAME                                STATUS        AGE
kubevirt-web-ui                     Terminating   7m
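
The namespace is stuck in Terminating while its pod (below) is still Running. Generic diagnostics for a namespace in this state (suggested commands, not part of the original report):

    oc get namespace kubevirt-web-ui -o yaml   # inspect spec.finalizers for anything blocking deletion
    oc get all -n kubevirt-web-ui              # list resources that still exist in the namespace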


[root@cnv-executor-ysegev-master1 ~]# oc get pods -n kubevirt-web-ui
NAME                       READY     STATUS    RESTARTS   AGE
console-78fd8f977c-llbm6   1/1       Running   0          8m
[root@cnv-executor-ysegev-master1 ~]# oc get pods -n kubevirt-web-ui -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "openshift.1",
            "ips": [
                "10.130.0.15"
            ],
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: 2018-11-20T15:46:57Z
    generateName: console-78fd8f977c-
    labels:
      app: kubevirt-web-ui
      component: ui
      pod-template-hash: "3498495337"
    name: console-78fd8f977c-llbm6
    namespace: kubevirt-web-ui
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: console-78fd8f977c
      uid: 8321ea76-ecdb-11e8-9dfa-fa163ef8ef62
    resourceVersion: "6244"
    selfLink: /api/v1/namespaces/kubevirt-web-ui/pods/console-78fd8f977c-llbm6
    uid: 83252e0a-ecdb-11e8-9dfa-fa163ef8ef62
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app: kubevirt-web-ui
            topologyKey: kubernetes.io/hostname
          weight: 100
    containers:
    - command:
      - /opt/bridge/bin/bridge
      - --public-dir=/opt/bridge/static
      - --config=/var/console-config/console-config.yaml
      - --branding=okdvirt
      image: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-web-ui:v1.3.0
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 3
        httpGet:
          path: /health
          port: 8443
          scheme: HTTPS
        initialDelaySeconds: 30
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      name: console
      ports:
      - containerPort: 8443
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /health
          port: 8443
          scheme: HTTPS
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        limits:
          cpu: 100m
          memory: 100Mi
        requests:
          cpu: 100m
          memory: 100Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000370000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/serving-cert
        name: serving-cert
        readOnly: true
      - mountPath: /var/oauth-config
        name: oauth-config
        readOnly: true
      - mountPath: /var/console-config
        name: console-config
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-t9wh2
        readOnly: true
    dnsPolicy: ClusterFirst
    imagePullSecrets:
    - name: default-dockercfg-pzntn
    nodeName: cnv-executor-ysegev-node2.example.com
    nodeSelector:
      node-role.kubernetes.io/compute: "true"
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000370000
      seLinuxOptions:
        level: s0:c19,c14
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: serving-cert
      secret:
        defaultMode: 288
        secretName: console-serving-cert
    - name: oauth-config
      secret:
        defaultMode: 288
        secretName: console-oauth-config
    - configMap:
        defaultMode: 288
        name: console-config
      name: console-config
    - name: default-token-t9wh2
      secret:
        defaultMode: 420
        secretName: default-token-t9wh2
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2018-11-20T15:46:58Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2018-11-20T15:47:28Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2018-11-20T15:46:57Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://856e1e8dbf6fcd0696172ebe07e286bb5b72c75f01794c8ceabc5e9357616e5e
      image: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-web-ui:v1.3.0
      imageID: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-web-ui@sha256:e2dec321bb3f36b9017af581747e3773caed8ad6b731abe2f3fa6efa8e5538be
      lastState: {}
      name: console
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2018-11-20T15:47:23Z
    hostIP: 172.16.0.13
    phase: Running
    podIP: 10.130.0.15
    qosClass: Guaranteed
    startTime: 2018-11-20T15:46:58Z
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
[root@cnv-executor-ysegev-master1 ~]# oc describe pods -n kubevirt-web-ui
Name:               console-78fd8f977c-llbm6
Namespace:          kubevirt-web-ui
Priority:           0
PriorityClassName:  <none>
Node:               cnv-executor-ysegev-node2.example.com/172.16.0.13
Start Time:         Tue, 20 Nov 2018 10:46:58 -0500
Labels:             app=kubevirt-web-ui
                    component=ui
                    pod-template-hash=3498495337
Annotations:        k8s.v1.cni.cncf.io/networks-status=[{
    "name": "openshift.1",
    "ips": [
        "10.130.0.15"
    ],
    "default": true,
    "dns": {}
}]
                openshift.io/scc=restricted
Status:         Running
IP:             10.130.0.15
Controlled By:  ReplicaSet/console-78fd8f977c
Containers:
  console:
    Container ID:  cri-o://856e1e8dbf6fcd0696172ebe07e286bb5b72c75f01794c8ceabc5e9357616e5e
    Image:         brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-web-ui:v1.3.0
    Image ID:      brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-web-ui@sha256:e2dec321bb3f36b9017af581747e3773caed8ad6b731abe2f3fa6efa8e5538be
    Port:          8443/TCP
    Host Port:     0/TCP
    Command:
      /opt/bridge/bin/bridge
      --public-dir=/opt/bridge/static
      --config=/var/console-config/console-config.yaml
      --branding=okdvirt
    State:          Running
      Started:      Tue, 20 Nov 2018 10:47:23 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:        100m
      memory:     100Mi
    Liveness:     http-get https://:8443/health delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:8443/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/console-config from console-config (rw)
      /var/oauth-config from oauth-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t9wh2 (ro)
      /var/serving-cert from serving-cert (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  console-serving-cert
    Optional:    false
  oauth-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  console-oauth-config
    Optional:    false
  console-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      console-config
    Optional:  false
  default-token-t9wh2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t9wh2
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:
  Type     Reason       Age   From                                            Message
  ----     ------       ----  ----                                            -------
  Normal   Scheduled    8m    default-scheduler                               Successfully assigned kubevirt-web-ui/console-78fd8f977c-llbm6 to cnv-executor-ysegev-node2.example.com
  Warning  FailedMount  8m    kubelet, cnv-executor-ysegev-node2.example.com  MountVolume.SetUp failed for volume "serving-cert" : secrets "console-serving-cert" not found
  Normal   Pulling      8m    kubelet, cnv-executor-ysegev-node2.example.com  pulling image "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-web-ui:v1.3.0"
  Normal   Pulled       8m    kubelet, cnv-executor-ysegev-node2.example.com  Successfully pulled image "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv-tech-preview/kubevirt-web-ui:v1.3.0"
  Normal   Created      8m    kubelet, cnv-executor-ysegev-node2.example.com  Created container
  Normal   Started      8m    kubelet, cnv-executor-ysegev-node2.example.com  Started container
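
A possible manual workaround (an assumption, not verified against this environment): delete the lingering ReplicaSet and pod so the namespace can finish terminating, then re-run the deprovision:

    oc delete replicaset console-78fd8f977c -n kubevirt-web-ui
    oc delete pod console-78fd8f977c-llbm6 -n kubevirt-web-ui --grace-period=0 --force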

Comment 1 Nelly Credi 2018-12-18 13:08:28 UTC
This should work in the operator; we will not fix the APB.