Bug 1827569 - [Azure] Couple of OSDs goes into 'CrashLoopBackOff' state after disk Host Caching was changed on Azure Platform
Summary: [Azure] Couple of OSDs goes into 'CrashLoopBackOff' state after disk Host Cac...
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: rook
Version: 4.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Sébastien Han
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks: 1797475 1859307 1879029
 
Reported: 2020-04-24 08:40 UTC by Shekhar Berry
Modified: 2023-09-15 00:31 UTC
CC: 20 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Some OSDs can become stuck in the `CrashLoopBackOff` state when an OpenShift Container Storage cluster deployed using Microsoft Azure is restarted.
Clone Of:
: 1879029 (view as bug list)
Environment:
Last Closed: 2021-03-03 10:05:18 UTC
Embargoed:


Attachments


Links
System:       GitHub openshift/rook pull 47
Private:      0
Priority:     None
Status:       closed
Summary:      Bug 1830702: ceph: do not fail on deactivate
Last Updated: 2021-02-15 14:54:17 UTC

Description Shekhar Berry 2020-04-24 08:40:05 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

A couple of OSDs keep going into the 'CrashLoopBackOff' state after restarts. This is happening on an OCS 4.3 environment installed on OCP 4.3 backed by Microsoft Azure. The setup had been running smoothly for the last 11 days, and this problem was observed only recently.

I started observing this after I changed the host caching policy of the disks in Azure from Read-Only to None. I made the change to all 3 disks on 3 different VMs, but the problem was observed on only 2 OSDs. I am not sure whether this is the root cause of the issue, but I am stating it here for the sake of providing complete information.
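For reference, the same change can also be made with the Azure CLI instead of the portal. A minimal sketch, assuming the `az` CLI is configured; the resource group, VM name, and data-disk index are placeholders, not taken from this setup:

```
# Show the current host caching setting of the VM's data disks
az vm show -g <resource-group> -n <vm-name> \
  --query "storageProfile.dataDisks[].{lun:lun,name:name,caching:caching}" -o table

# Switch host caching on one data disk from ReadOnly to None
# (Azure re-attaches the disk behind the scenes, which appears to be what triggers this bug)
az vm update -g <resource-group> -n <vm-name> \
  --set storageProfile.dataDisks[0].caching=None
```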


Version of all relevant components (if applicable):

oc get csv
NAME                            DISPLAY                       VERSION        REPLACES   PHASE
lib-bucket-provisioner.v1.0.0   lib-bucket-provisioner        1.0.0                     Succeeded
ocs-operator.v4.3.0-407.ci      OpenShift Container Storage   4.3.0-407.ci              Succeeded


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
yes. I am unable to proceed with performance analysis


Is there any workaround available to the best of your knowledge?
I am not aware of any workaround


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Not sure

Can this issue be reproduced from the UI?
Not sure

Additional info:

OCS Must Gather: http://perf1.perf.lab.eng.bos.redhat.com/pub/shberry/OCS_on_Azure/ocs_must_gather/

OCP Must Gather: http://perf1.perf.lab.eng.bos.redhat.com/pub/shberry/OCS_on_Azure/ocp_must_gather/

Node Logs: http://perf1.perf.lab.eng.bos.redhat.com/pub/shberry/OCS_on_Azure/node_logs/

Please see my next comments for the output of oc get pods and oc describe pod.

Comment 2 Shekhar Berry 2020-04-24 08:50:55 UTC
oc get pods -n openshift-storage
NAME                                                              READY   STATUS             RESTARTS   AGE
csi-cephfsplugin-6snk7                                            3/3     Running            0          11d
csi-cephfsplugin-pmxcp                                            3/3     Running            0          11d
csi-cephfsplugin-provisioner-5b8bbcfdf-j2zh8                      5/5     Running            0          11d
csi-cephfsplugin-provisioner-5b8bbcfdf-kdwxn                      5/5     Running            0          11d
csi-cephfsplugin-z2gtp                                            3/3     Running            0          11d
csi-rbdplugin-2hjbg                                               3/3     Running            0          11d
csi-rbdplugin-provisioner-5dd4779bc4-66q58                        5/5     Running            0          11d
csi-rbdplugin-provisioner-5dd4779bc4-kbvb9                        5/5     Running            0          11d
csi-rbdplugin-rdwbf                                               3/3     Running            0          11d
csi-rbdplugin-s4xwm                                               3/3     Running            0          11d
lib-bucket-provisioner-55f74d96f6-zz6xj                           1/1     Running            0          11d
noobaa-core-0                                                     1/1     Running            0          11d
noobaa-db-0                                                       1/1     Running            0          11d
noobaa-endpoint-78bc88d86c-lr6vk                                  1/1     Running            0          11d
noobaa-operator-796f85fff6-phs2m                                  1/1     Running            0          11d
ocs-operator-5f989d6586-lk74j                                     1/1     Running            0          11d
rook-ceph-crashcollector-29808b6eec7b1db24ca91f8a4844da13-xhcmc   1/1     Running            0          11d
rook-ceph-crashcollector-5d85441db4a28edf9b3b03fce330cb90-6flhm   1/1     Running            0          11d
rook-ceph-crashcollector-dea95a2e5d69ee92594f82c516ad797b-82wfm   1/1     Running            0          11d
rook-ceph-drain-canary-29808b6eec7b1db24ca91f8a4844da13-69qkb7b   1/1     Running            0          11d
rook-ceph-drain-canary-5d85441db4a28edf9b3b03fce330cb90-8fsw94f   1/1     Running            0          11d
rook-ceph-drain-canary-dea95a2e5d69ee92594f82c516ad797b-55kswgc   1/1     Running            0          11d
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-78764fdd8cm4n   1/1     Running            0          11d
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-7d9dc44cf5lvn   1/1     Running            0          11d
rook-ceph-mgr-a-558bbbb574-9gd97                                  1/1     Running            0          11d
rook-ceph-mon-a-588b8f98f5-lmr6p                                  1/1     Running            0          11d
rook-ceph-mon-b-795fbb995b-9mf5j                                  1/1     Running            0          11d
rook-ceph-mon-c-86dbb99fb5-8hfnv                                  1/1     Running            0          11d
rook-ceph-operator-7f48d5c8fd-xg4jw                               1/1     Running            0          11d
rook-ceph-osd-0-7d68bb854-kksdb                                   0/1     CrashLoopBackOff   81         11d
rook-ceph-osd-1-66647998d4-46trk                                  0/1     CrashLoopBackOff   81         11d
rook-ceph-osd-2-cc66d6dc-92ps2                                    1/1     Running            0          11d
rook-ceph-osd-prepare-ocs-deviceset-0-0-td5db-7nn7p               0/1     Completed          0          11d
rook-ceph-osd-prepare-ocs-deviceset-1-0-vmz7q-74zch               0/1     Completed          0          11d
rook-ceph-osd-prepare-ocs-deviceset-2-0-jpqvv-thmrl               0/1     Completed          0          11d
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-fdff986mfsh7   1/1     Running            0          11d
rook-ceph-tools-57fd596bb4-5pn7n                                  1/1     Running            0          10d

=========================================================================================================================================================================================

oc describe pod rook-ceph-osd-0-7d68bb854-kksdb -n openshift-storage
Name:         rook-ceph-osd-0-7d68bb854-kksdb
Namespace:    openshift-storage
Priority:     0
Node:         shberry-test-apr12-oc-l8jhd-worker-eastus1-sjw5t/10.0.32.6
Start Time:   Mon, 13 Apr 2020 02:04:54 +0530
Labels:       app=rook-ceph-osd
              ceph-osd-id=0
              ceph.rook.io/pvc=ocs-deviceset-1-0-vmz7q
              failure-domain=ocs-deviceset-1-0-vmz7q
              pod-template-hash=7d68bb854
              portable=true
              rook_cluster=openshift-storage
Annotations:  k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.25"
                    ],
                    "dns": {},
                    "default-route": [
                        "10.128.2.1"
                    ]
                }]
              openshift.io/scc: rook-ceph
Status:       Running
IP:           10.128.2.25
IPs:
  IP:           10.128.2.25
Controlled By:  ReplicaSet/rook-ceph-osd-0-7d68bb854
Init Containers:
  config-init:
    Container ID:  cri-o://61fc6fcaebe6124de946240163854ab74957a19c2f1b87d45816f1663adf4164
    Image:         quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Image ID:      quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Port:          <none>
    Host Port:     <none>
    Args:
      ceph
      osd
      init
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:02 +0530
      Finished:     Mon, 13 Apr 2020 02:05:02 +0530
    Ready:          True
    Restart Count:  0
    Environment:
      ROOK_NODE_NAME:               ocs-deviceset-1-0-vmz7q
      ROOK_CLUSTER_ID:              d3109e5c-4da4-475d-87ac-92bcf0ac69a3
      ROOK_PRIVATE_IP:               (v1:status.podIP)
      ROOK_PUBLIC_IP:                (v1:status.podIP)
      ROOK_CLUSTER_NAME:            openshift-storage
      ROOK_MON_ENDPOINTS:           <set to the key 'data' of config map 'rook-ceph-mon-endpoints'>  Optional: false
      ROOK_MON_SECRET:              <set to the key 'mon-secret' in secret 'rook-ceph-mon'>          Optional: false
      ROOK_ADMIN_SECRET:            <set to the key 'admin-secret' in secret 'rook-ceph-mon'>        Optional: false
      ROOK_CONFIG_DIR:              /var/lib/rook
      ROOK_CEPH_CONFIG_OVERRIDE:    /etc/rook/config/override.conf
      ROOK_FSID:                    <set to the key 'fsid' in secret 'rook-ceph-mon'>  Optional: false
      NODE_NAME:                     (v1:spec.nodeName)
      ROOK_CRUSHMAP_HOSTNAME:       ocs-deviceset-1-0-vmz7q
      CEPH_VOLUME_DEBUG:            1
      CEPH_VOLUME_SKIP_RESTORECON:  1
      DM_DISABLE_UDEV:              1
      TINI_SUBREAPER:               
      ROOK_OSD_ID:                  0
      ROOK_CEPH_VERSION:            ceph version 14.2.4-125 nautilus
      ROOK_IS_DEVICE:               true
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
  copy-bins:
    Container ID:  cri-o://53e7b8766025eb61982ef42fb4ed7f81a922cb36f8a7b6785e7941e5e1279fdc
    Image:         quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Image ID:      quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Port:          <none>
    Host Port:     <none>
    Args:
      copy-binaries
      --copy-to-dir
      /rook
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:03 +0530
      Finished:     Mon, 13 Apr 2020 02:05:03 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /rook from rook-binaries (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
  chown-container-data-dir:
    Container ID:  cri-o://3ae8b06b64808a78a2943525d989a3be2ef52b6fac332a3ecc0314af5a1a6903
    Image:         quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Image ID:      quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Port:          <none>
    Host Port:     <none>
    Command:
      chown
    Args:
      --verbose
      --recursive
      ceph:ceph
      /var/log/ceph
      /var/lib/ceph/crash
      /var/lib/rook/osd0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:04 +0530
      Finished:     Mon, 13 Apr 2020 02:05:04 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  8Gi
    Requests:
      cpu:        1
      memory:     4Gi
    Environment:  <none>
    Mounts:
      /dev from devices (rw)
      /etc/ceph from rook-config-override (ro)
      /mnt from ocs-deviceset-1-0-vmz7q-bridge (rw)
      /rook from rook-binaries (rw)
      /run/udev from run-udev (rw)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
  blkdevmapper:
    Container ID:  cri-o://288ea5c8429653ceae8184bab0bf25f0d2fb4a1263e0d717c3f0e94b5d708573
    Image:         quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Image ID:      quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -a
      /ocs-deviceset-1-0-vmz7q
      /mnt/ocs-deviceset-1-0-vmz7q
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:05 +0530
      Finished:     Mon, 13 Apr 2020 02:05:05 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt from ocs-deviceset-1-0-vmz7q-bridge (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
    Devices:
      /ocs-deviceset-1-0-vmz7q from ocs-deviceset-1-0-vmz7q
Containers:
  osd:
    Container ID:  cri-o://ff1d6ab566fd582cfba05154c6b6d6bf8fe9a3f137c569419955f8449cea69ce
    Image:         quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Image ID:      quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Port:          <none>
    Host Port:     <none>
    Command:
      /rook/tini
    Args:
      --
      /rook/rook
      ceph
      osd
      start
      --
      --foreground
      --id
      0
      --fsid
      45ff4fa7-1275-427b-8934-05ebaab49beb
      --cluster
      ceph
      --setuser
      ceph
      --setgroup
      ceph
      --setuser-match-path
      /var/lib/rook/osd0
      --crush-location=root=default host=ocs-deviceset-1-0-vmz7q region=eastus zone=eastus-1
      --default-log-to-file
      false
      --ms-learn-addr-from-peer=false
    State:          Terminated
      Reason:       Error
      Message:      failed to deactivate volume group for lv "/dev/ceph-b426f3a2-fec0-46f4-8837-7b790f0b8ebe/osd-block-e0875bbf-f448-4155-9de5-6bcd157fae5f": Failed to complete '': exit status 5. 
      Exit Code:    1
      Started:      Fri, 24 Apr 2020 14:10:22 +0530
      Finished:     Fri, 24 Apr 2020 14:10:22 +0530
    Last State:     Terminated
      Reason:       Error
      Message:      failed to deactivate volume group for lv "/dev/ceph-b426f3a2-fec0-46f4-8837-7b790f0b8ebe/osd-block-e0875bbf-f448-4155-9de5-6bcd157fae5f": Failed to complete '': exit status 5. 
      Exit Code:    1
      Started:      Fri, 24 Apr 2020 14:05:11 +0530
      Finished:     Fri, 24 Apr 2020 14:05:11 +0530
    Ready:          False
    Restart Count:  82
    Limits:
      cpu:     2
      memory:  8Gi
    Requests:
      cpu:     1
      memory:  4Gi
    Environment:
      ROOK_NODE_NAME:               ocs-deviceset-1-0-vmz7q
      ROOK_CLUSTER_ID:              d3109e5c-4da4-475d-87ac-92bcf0ac69a3
      ROOK_PRIVATE_IP:               (v1:status.podIP)
      ROOK_PUBLIC_IP:                (v1:status.podIP)
      ROOK_CLUSTER_NAME:            openshift-storage
      ROOK_MON_ENDPOINTS:           <set to the key 'data' of config map 'rook-ceph-mon-endpoints'>  Optional: false
      ROOK_MON_SECRET:              <set to the key 'mon-secret' in secret 'rook-ceph-mon'>          Optional: false
      ROOK_ADMIN_SECRET:            <set to the key 'admin-secret' in secret 'rook-ceph-mon'>        Optional: false
      ROOK_CONFIG_DIR:              /var/lib/rook
      ROOK_CEPH_CONFIG_OVERRIDE:    /etc/rook/config/override.conf
      ROOK_FSID:                    <set to the key 'fsid' in secret 'rook-ceph-mon'>  Optional: false
      NODE_NAME:                     (v1:spec.nodeName)
      ROOK_CRUSHMAP_HOSTNAME:       ocs-deviceset-1-0-vmz7q
      CEPH_VOLUME_DEBUG:            1
      CEPH_VOLUME_SKIP_RESTORECON:  1
      DM_DISABLE_UDEV:              1
      TINI_SUBREAPER:               
      CONTAINER_IMAGE:              quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
      POD_NAME:                     rook-ceph-osd-0-7d68bb854-kksdb (v1:metadata.name)
      POD_NAMESPACE:                openshift-storage (v1:metadata.namespace)
      NODE_NAME:                     (v1:spec.nodeName)
      POD_MEMORY_LIMIT:             8589934592 (limits.memory)
      POD_MEMORY_REQUEST:           4294967296 (requests.memory)
      POD_CPU_LIMIT:                2 (limits.cpu)
      POD_CPU_REQUEST:              1 (requests.cpu)
      ROOK_OSD_UUID:                e0875bbf-f448-4155-9de5-6bcd157fae5f
      ROOK_OSD_ID:                  0
      ROOK_OSD_STORE_TYPE:          bluestore
      ROOK_CEPH_MON_HOST:           <set to the key 'mon_host' in secret 'rook-ceph-config'>  Optional: false
      CEPH_ARGS:                    -m $(ROOK_CEPH_MON_HOST)
      ROOK_PVC_BACKED_OSD:          true
      ROOK_LV_PATH:                 /dev/ceph-b426f3a2-fec0-46f4-8837-7b790f0b8ebe/osd-block-e0875bbf-f448-4155-9de5-6bcd157fae5f
      ROOK_LV_BACKED_PV:            false
    Mounts:
      /dev from devices (rw)
      /etc/ceph from rook-config-override (ro)
      /mnt from ocs-deviceset-1-0-vmz7q-bridge (rw)
      /rook from rook-binaries (rw)
      /run/udev from run-udev (rw)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  rook-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook
    HostPathType:  
  rook-config-override:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rook-config-override
    Optional:  false
  rook-ceph-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook/openshift-storage/log
    HostPathType:  
  rook-ceph-crash:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook/openshift-storage/crash
    HostPathType:  
  devices:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  
  ocs-deviceset-1-0-vmz7q:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ocs-deviceset-1-0-vmz7q
    ReadOnly:   false
  ocs-deviceset-1-0-vmz7q-bridge:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  run-udev:
    Type:          HostPath (bare host directory volume)
    Path:          /run/udev
    HostPathType:  
  rook-binaries:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  rook-ceph-osd-token-blwbf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-ceph-osd-token-blwbf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
                 node.ocs.openshift.io/storage=true:NoSchedule
Events:
  Type     Reason   Age                       From                                                       Message
  ----     ------   ----                      ----                                                       -------
  Normal   Pulled   37m (x76 over 11d)        kubelet, shberry-test-apr12-oc-l8jhd-worker-eastus1-sjw5t  Container image "quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d" already present on machine
  Warning  BackOff  2m13s (x1823 over 6h37m)  kubelet, shberry-test-apr12-oc-l8jhd-worker-eastus1-sjw5t  Back-off restarting failed container


======================================================================================================================================================================================================


oc describe pod rook-ceph-osd-1-66647998d4-46trk
Name:         rook-ceph-osd-1-66647998d4-46trk
Namespace:    openshift-storage
Priority:     0
Node:         shberry-test-apr12-oc-l8jhd-worker-eastus2-wzqht/10.0.32.5
Start Time:   Mon, 13 Apr 2020 02:04:58 +0530
Labels:       app=rook-ceph-osd
              ceph-osd-id=1
              ceph.rook.io/pvc=ocs-deviceset-2-0-jpqvv
              failure-domain=ocs-deviceset-2-0-jpqvv
              pod-template-hash=66647998d4
              portable=true
              rook_cluster=openshift-storage
Annotations:  k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.129.2.19"
                    ],
                    "dns": {},
                    "default-route": [
                        "10.129.2.1"
                    ]
                }]
              openshift.io/scc: rook-ceph
Status:       Running
IP:           10.129.2.19
IPs:
  IP:           10.129.2.19
Controlled By:  ReplicaSet/rook-ceph-osd-1-66647998d4
Init Containers:
  config-init:
    Container ID:  cri-o://91bf823b9be58d8fed0f88612d3c5def7232d6137b6c061c33e0beddeaac76de
    Image:         quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Image ID:      quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Port:          <none>
    Host Port:     <none>
    Args:
      ceph
      osd
      init
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:01 +0530
      Finished:     Mon, 13 Apr 2020 02:05:01 +0530
    Ready:          True
    Restart Count:  0
    Environment:
      ROOK_NODE_NAME:               ocs-deviceset-2-0-jpqvv
      ROOK_CLUSTER_ID:              d3109e5c-4da4-475d-87ac-92bcf0ac69a3
      ROOK_PRIVATE_IP:               (v1:status.podIP)
      ROOK_PUBLIC_IP:                (v1:status.podIP)
      ROOK_CLUSTER_NAME:            openshift-storage
      ROOK_MON_ENDPOINTS:           <set to the key 'data' of config map 'rook-ceph-mon-endpoints'>  Optional: false
      ROOK_MON_SECRET:              <set to the key 'mon-secret' in secret 'rook-ceph-mon'>          Optional: false
      ROOK_ADMIN_SECRET:            <set to the key 'admin-secret' in secret 'rook-ceph-mon'>        Optional: false
      ROOK_CONFIG_DIR:              /var/lib/rook
      ROOK_CEPH_CONFIG_OVERRIDE:    /etc/rook/config/override.conf
      ROOK_FSID:                    <set to the key 'fsid' in secret 'rook-ceph-mon'>  Optional: false
      NODE_NAME:                     (v1:spec.nodeName)
      ROOK_CRUSHMAP_HOSTNAME:       ocs-deviceset-2-0-jpqvv
      CEPH_VOLUME_DEBUG:            1
      CEPH_VOLUME_SKIP_RESTORECON:  1
      DM_DISABLE_UDEV:              1
      TINI_SUBREAPER:               
      ROOK_OSD_ID:                  1
      ROOK_CEPH_VERSION:            ceph version 14.2.4-125 nautilus
      ROOK_IS_DEVICE:               true
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
  copy-bins:
    Container ID:  cri-o://5ff768b0296a59ba0f79d1285434717b23dc6039c57954cb737a331dc9f8a747
    Image:         quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Image ID:      quay.io/rhceph-dev/rook-ceph@sha256:8dee92b1f069fe7d5a00d4427a56b15f55034d58013e0f30bb68859bbc608914
    Port:          <none>
    Host Port:     <none>
    Args:
      copy-binaries
      --copy-to-dir
      /rook
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:01 +0530
      Finished:     Mon, 13 Apr 2020 02:05:02 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /rook from rook-binaries (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
  chown-container-data-dir:
    Container ID:  cri-o://5c169d3aef7c97f13f9123fc1d5fcfd20be97679ca71a854d27767ad4252c4ab
    Image:         quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Image ID:      quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Port:          <none>
    Host Port:     <none>
    Command:
      chown
    Args:
      --verbose
      --recursive
      ceph:ceph
      /var/log/ceph
      /var/lib/ceph/crash
      /var/lib/rook/osd1
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:02 +0530
      Finished:     Mon, 13 Apr 2020 02:05:02 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  8Gi
    Requests:
      cpu:        1
      memory:     4Gi
    Environment:  <none>
    Mounts:
      /dev from devices (rw)
      /etc/ceph from rook-config-override (ro)
      /mnt from ocs-deviceset-2-0-jpqvv-bridge (rw)
      /rook from rook-binaries (rw)
      /run/udev from run-udev (rw)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
  blkdevmapper:
    Container ID:  cri-o://59d204233a12a2e0802b6c33c4e48e4ae90c6aaa84a3c2c9173a3048eb0ff22b
    Image:         quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Image ID:      quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -a
      /ocs-deviceset-2-0-jpqvv
      /mnt/ocs-deviceset-2-0-jpqvv
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 13 Apr 2020 02:05:03 +0530
      Finished:     Mon, 13 Apr 2020 02:05:03 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt from ocs-deviceset-2-0-jpqvv-bridge (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
    Devices:
      /ocs-deviceset-2-0-jpqvv from ocs-deviceset-2-0-jpqvv
Containers:
  osd:
    Container ID:  cri-o://cc0312c264277bb2e5ecc666695c0dd88ad092230c1ca58a025fc0b000fbd9ad
    Image:         quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Image ID:      quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Port:          <none>
    Host Port:     <none>
    Command:
      /rook/tini
    Args:
      --
      /rook/rook
      ceph
      osd
      start
      --
      --foreground
      --id
      1
      --fsid
      45ff4fa7-1275-427b-8934-05ebaab49beb
      --cluster
      ceph
      --setuser
      ceph
      --setgroup
      ceph
      --setuser-match-path
      /var/lib/rook/osd1
      --crush-location=root=default host=ocs-deviceset-2-0-jpqvv region=eastus zone=eastus-2
      --default-log-to-file
      false
      --ms-learn-addr-from-peer=false
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Message:      failed to deactivate volume group for lv "/dev/ceph-b7e1cff9-b7c1-442a-a75b-3b00a8f31842/osd-block-80bc9029-b8a4-4604-bf09-c28fbe9ca58f": Failed to complete '': exit status 5. 
      Exit Code:    1
      Started:      Fri, 24 Apr 2020 14:16:40 +0530
      Finished:     Fri, 24 Apr 2020 14:16:40 +0530
    Ready:          False
    Restart Count:  83
    Limits:
      cpu:     2
      memory:  8Gi
    Requests:
      cpu:     1
      memory:  4Gi
    Environment:
      ROOK_NODE_NAME:               ocs-deviceset-2-0-jpqvv
      ROOK_CLUSTER_ID:              d3109e5c-4da4-475d-87ac-92bcf0ac69a3
      ROOK_PRIVATE_IP:               (v1:status.podIP)
      ROOK_PUBLIC_IP:                (v1:status.podIP)
      ROOK_CLUSTER_NAME:            openshift-storage
      ROOK_MON_ENDPOINTS:           <set to the key 'data' of config map 'rook-ceph-mon-endpoints'>  Optional: false
      ROOK_MON_SECRET:              <set to the key 'mon-secret' in secret 'rook-ceph-mon'>          Optional: false
      ROOK_ADMIN_SECRET:            <set to the key 'admin-secret' in secret 'rook-ceph-mon'>        Optional: false
      ROOK_CONFIG_DIR:              /var/lib/rook
      ROOK_CEPH_CONFIG_OVERRIDE:    /etc/rook/config/override.conf
      ROOK_FSID:                    <set to the key 'fsid' in secret 'rook-ceph-mon'>  Optional: false
      NODE_NAME:                     (v1:spec.nodeName)
      ROOK_CRUSHMAP_HOSTNAME:       ocs-deviceset-2-0-jpqvv
      CEPH_VOLUME_DEBUG:            1
      CEPH_VOLUME_SKIP_RESTORECON:  1
      DM_DISABLE_UDEV:              1
      TINI_SUBREAPER:               
      CONTAINER_IMAGE:              quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
      POD_NAME:                     rook-ceph-osd-1-66647998d4-46trk (v1:metadata.name)
      POD_NAMESPACE:                openshift-storage (v1:metadata.namespace)
      NODE_NAME:                     (v1:spec.nodeName)
      POD_MEMORY_LIMIT:             8589934592 (limits.memory)
      POD_MEMORY_REQUEST:           4294967296 (requests.memory)
      POD_CPU_LIMIT:                2 (limits.cpu)
      POD_CPU_REQUEST:              1 (requests.cpu)
      ROOK_OSD_UUID:                80bc9029-b8a4-4604-bf09-c28fbe9ca58f
      ROOK_OSD_ID:                  1
      ROOK_OSD_STORE_TYPE:          bluestore
      ROOK_CEPH_MON_HOST:           <set to the key 'mon_host' in secret 'rook-ceph-config'>  Optional: false
      CEPH_ARGS:                    -m $(ROOK_CEPH_MON_HOST)
      ROOK_PVC_BACKED_OSD:          true
      ROOK_LV_PATH:                 /dev/ceph-b7e1cff9-b7c1-442a-a75b-3b00a8f31842/osd-block-80bc9029-b8a4-4604-bf09-c28fbe9ca58f
      ROOK_LV_BACKED_PV:            false
    Mounts:
      /dev from devices (rw)
      /etc/ceph from rook-config-override (ro)
      /mnt from ocs-deviceset-2-0-jpqvv-bridge (rw)
      /rook from rook-binaries (rw)
      /run/udev from run-udev (rw)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-osd-token-blwbf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  rook-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook
    HostPathType:  
  rook-config-override:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rook-config-override
    Optional:  false
  rook-ceph-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook/openshift-storage/log
    HostPathType:  
  rook-ceph-crash:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rook/openshift-storage/crash
    HostPathType:  
  devices:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  
  ocs-deviceset-2-0-jpqvv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ocs-deviceset-2-0-jpqvv
    ReadOnly:   false
  ocs-deviceset-2-0-jpqvv-bridge:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  run-udev:
    Type:          HostPath (bare host directory volume)
    Path:          /run/udev
    HostPathType:  
  rook-binaries:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  rook-ceph-osd-token-blwbf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-ceph-osd-token-blwbf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
                 node.ocs.openshift.io/storage=true:NoSchedule
Events:
  Type     Reason   Age                    From                                                       Message
  ----     ------   ----                   ----                                                       -------
  Normal   Pulled   29m (x79 over 11d)     kubelet, shberry-test-apr12-oc-l8jhd-worker-eastus2-wzqht  Container image "quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d" already present on machine
  Warning  BackOff  4m (x1841 over 6h44m)  kubelet, shberry-test-apr12-oc-l8jhd-worker-eastus2-wzqht  Back-off restarting failed container

Comment 3 Travis Nielsen 2020-04-24 17:26:33 UTC
Restarting the node likely caused the LVs to be activated again, where they may have been affected by the change in cache policy. 

It would be good to know if this issue reproduces in OCS 4.5 when available (or with Rook v1.3 upstream today), where LVM will no longer be used for the OSDs on PVCs.

Comment 4 Sébastien Han 2020-05-05 14:19:10 UTC
Probably the same root cause as https://bugzilla.redhat.com/show_bug.cgi?id=1830702, which is already fixed. The fix will be in the next 4.4 build.

Comment 5 Travis Nielsen 2020-05-08 00:14:43 UTC
This would already be in the latest 4.4 RC build.

Comment 6 Shekhar Berry 2020-05-12 12:15:08 UTC
Hi,

On a new OCP 4.4 setup I deployed the OCS 4.4 RC build backed by the Azure platform, but the issue still persists. On changing the cache policy in the Azure portal, a couple of my OSD pods went to 'CrashLoopBackOff'.

A strange behavior is that one OSD pod always ends up running after the cache policy modification.

The details of the setup are as follows:

oc version
Client Version: openshift-clients-4.3.0-201910250623-88-g6a937dfe
Server Version: 4.4.3
Kubernetes Version: v1.17.1

oc get csv -n openshift-storage
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.4.0-420.ci   OpenShift Container Storage   4.4.0-420.ci              Succeeded

oc get pods -n openshift-storage | grep osd
rook-ceph-osd-0-bfc966977-fsxzg                                   0/1     CrashLoopBackOff   8          52m
rook-ceph-osd-1-779f677b4c-8vlwh                                  0/1     CrashLoopBackOff   8          52m
rook-ceph-osd-2-5d9c74b789-5mjqm                                  1/1     Running            0          52m

Here is the OCS must gather located for your reference: http://perf1.perf.lab.eng.bos.redhat.com/pub/shberry/OCS_on_Azure/ocs_44_azure_must_gather/

Snippet of oc describe pod rook-ceph-osd-0-bfc966977-fsxzg

Containers:
  osd:
    Container ID:  cri-o://83e74e5e8891085fdce6f7c03e4abeb93b363ec7f5675387b72a279b44a435af
    Image:         quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Image ID:      quay.io/rhceph-dev/rhceph@sha256:9e521d33c1b3c7f5899a8a5f36eee423b8003827b7d12d780a58a701d0a64f0d
    Port:          <none>
    Host Port:     <none>
    Command:
      /rook/tini
    Args:
      --
      /rook/rook
      ceph
      osd
      start
      --
      --foreground
      --id
      0
      --fsid
      4248debc-6bd2-4ac9-aa36-c73aede469aa
      --cluster
      ceph
      --setuser
      ceph
      --setgroup
      ceph
      --setuser-match-path
      /var/lib/rook/osd0
      --crush-location=root=default host=ocs-deviceset-1-0-pbl96 region=eastus zone=eastus-2
      --default-log-to-file
      false
      --ms-learn-addr-from-peer=false
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 12 May 2020 17:10:43 +0530
      Finished:     Tue, 12 May 2020 17:10:43 +0530
    Ready:          False
    Restart Count:  9
    Limits:
      cpu:     2
      memory:  8Gi
    Requests:
      cpu:     1
      memory:  4Gi

complete oc describe pod rook-ceph-osd-1-779f677b4c-8vlwh : http://pastebin.test.redhat.com/864038

Comment 7 Travis Nielsen 2020-05-12 14:13:33 UTC
I wonder if this is related to LVM and will be solved in 4.5 anyway with OSDs in raw mode. @Seb wdyt?

Comment 8 Sébastien Han 2020-05-12 16:58:03 UTC
Yes, newer deployments with 4.5 will fix that issue.

Comment 9 Sébastien Han 2020-05-22 09:37:43 UTC
Can you try again with a more recent build? This was fixed with https://github.com/openshift/rook/pull/60. 
Thanks.

Comment 10 Elad 2020-06-01 15:51:18 UTC
QA acking. Test steps are provided in comment #6

Comment 14 Travis Nielsen 2020-06-01 19:34:09 UTC
Moving to ON_QA since the OSDs in raw mode are already included in 4.5 builds.

Comment 16 Martin Bukatovic 2020-08-20 11:40:16 UTC
I performed the scenario as described in comment #6:

-   Installed OCS on Azure, with 3 worker nodes, one OSD Azure disk per
    worker
-   checked that OCS is running fine (status is ok, all OCS pods are
    running)
-   located the OSD Azure disk for each worker VM and set its **Host
    caching** from **Read-only** to **None**
-   checked the status of the OSD pods again (a command sketch follows
    this list)
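A loop like the following sketch can be used for that last check; the namespace is from this cluster, but the exact command and interval are assumptions, not a record of what was run:

```
# Print a timestamp and the OSD pod status every few seconds while the
# Host caching setting is being changed in the Azure portal
while true; do
  date
  oc get pods -n openshift-storage | grep rook-ceph-osd- | grep -v prepare
  sleep 6
done
```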

I reproduced the bug with OCS 4.4.2 on OCP
4.5.0-0.nightly-2020-08-15-052753:

```
rook-ceph-osd-0-67db8b7b97-x6vlk                                  0/1     CrashLoopBackOff   6          23h
rook-ceph-osd-1-6cfd5dbfb6-wdpn8                                  1/1     Running            0          23h
rook-ceph-osd-2-7f78cc585c-4wvgg                                  0/1     CrashLoopBackOff   6          23h
```

When I retried with OCS 4.5.0-54.ci on OCP
4.5.0-0.nightly-2020-08-20-051434, I see the same behaviour. When the
Host caching is changed for the OSD Azure disks, 2 out of 3 OSD pods end up
in the CrashLoopBackOff state:

```
Thu 20 Aug 2020 01:13:44 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  1/1     Running     0          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  1/1     Running     0          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running     0          133m
Thu 20 Aug 2020 01:13:50 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  0/1     CrashLoopBackOff   1          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  1/1     Running            0          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running            0          133m
Thu 20 Aug 2020 01:13:56 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  0/1     CrashLoopBackOff   1          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  1/1     Running            0          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running            0          133m
Thu 20 Aug 2020 01:14:02 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  0/1     CrashLoopBackOff   1          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  1/1     Running            0          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running            0          133m
Thu 20 Aug 2020 01:14:08 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  0/1     Error       2          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  1/1     Running     0          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running     0          133m
Thu 20 Aug 2020 01:14:13 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  0/1     CrashLoopBackOff   2          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  1/1     Running            0          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running            0          133m
Thu 20 Aug 2020 01:14:19 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  0/1     CrashLoopBackOff   2          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  0/1     Error              1          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running            0          133m
Thu 20 Aug 2020 01:14:25 PM CEST
rook-ceph-osd-0-5f957fc6dc-vzq2g                                  0/1     CrashLoopBackOff   2          133m
rook-ceph-osd-1-7dd4d46cb9-52tdt                                  0/1     CrashLoopBackOff   1          133m
rook-ceph-osd-2-858884558c-spswj                                  1/1     Running            0          133m
```

Full version report for OCS 4.5.0-54.ci:

```
cluster channel: stable-4.5
cluster version: 4.5.0-0.nightly-2020-08-20-051434
cluster image: registry.svc.ci.openshift.org/ocp/release@sha256:51f05b5ac9ed21b1be5bc67eba9638351a54932d9d69995ec96a5cb5ba5126dd

storage namespace openshift-cluster-storage-operator
image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:55527efb25dc71aa392b59f269afc5fed6a03af1bb0c2fa78a90cc67ac40342b
 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:55527efb25dc71aa392b59f269afc5fed6a03af1bb0c2fa78a90cc67ac40342b
image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d0505764aab80d4cc297727f5baea31efd4d8627b5e6f3ebcb6e3c0b82af19b
 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d0505764aab80d4cc297727f5baea31efd4d8627b5e6f3ebcb6e3c0b82af19b
image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:874c7266607cdf9cd6996d1a3345a493fd13b7f719263bfae3c10ddaf0ae1132
 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:874c7266607cdf9cd6996d1a3345a493fd13b7f719263bfae3c10ddaf0ae1132

storage namespace openshift-kube-storage-version-migrator
image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df263c82ee7da6142f4cd633b590468005f23e72f61427db3783d0c7b6120b3c
 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df263c82ee7da6142f4cd633b590468005f23e72f61427db3783d0c7b6120b3c

storage namespace openshift-kube-storage-version-migrator-operator
image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:552c2a0af54aa522e4e7545ce3d6813d7b103aea4a983387bca50a0a1178dc18
 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:552c2a0af54aa522e4e7545ce3d6813d7b103aea4a983387bca50a0a1178dc18

storage namespace openshift-storage
image quay.io/rhceph-dev/cephcsi@sha256:6f873f8aaa4367ef835f43c35850d7bb86cc971fac7d0949d4079c58cb6728fc
 * quay.io/rhceph-dev/cephcsi@sha256:540c0b93f6d2c76845ebbfa96a728b5eb58f08fd4ec78641ba3d23aaadbfcc0c
image registry.redhat.io/openshift4/ose-csi-driver-registrar@sha256:39930a20d518455a9776fdae1f70945564fec4acd4f028a66ba9f24ee31bf1dc
 * registry.redhat.io/openshift4/ose-csi-driver-registrar@sha256:39930a20d518455a9776fdae1f70945564fec4acd4f028a66ba9f24ee31bf1dc
image registry.redhat.io/openshift4/ose-csi-external-attacher@sha256:74504ef79d8bb8ec3d517bf47ef5513fcd183190915ef55b7e1ddaca1e98d2cc
 * registry.redhat.io/openshift4/ose-csi-external-attacher@sha256:74504ef79d8bb8ec3d517bf47ef5513fcd183190915ef55b7e1ddaca1e98d2cc
image registry.redhat.io/openshift4/ose-csi-external-provisioner-rhel7@sha256:c237b0349c7aba8b3f32f27392f90ad07e1ca4bede000ff3a6dea34253b2278e
 * registry.redhat.io/openshift4/ose-csi-external-provisioner-rhel7@sha256:bbdf56eec860aeeead082f54c7a7685a63d54f230df83216493af5623c1d6498
image registry.redhat.io/openshift4/ose-csi-external-resizer-rhel7@sha256:12f6ed87b8b71443da15faa1c521cfac8fd5defeaf2734fb88c3305d8bd71a3d
 * registry.redhat.io/openshift4/ose-csi-external-resizer-rhel7@sha256:12f6ed87b8b71443da15faa1c521cfac8fd5defeaf2734fb88c3305d8bd71a3d
image quay.io/rhceph-dev/mcg-core@sha256:d2e4edc717533ae0bdede3d8ada917cec06a946e0662b560ffd4493fa1b51f27
 * quay.io/rhceph-dev/mcg-core@sha256:6a511b8d44d9ced96db9156a0b672f85f2424a671c8a2c978e6f52c1d37fe9e2
image registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:ba74027bb4b244df0b0823ee29aa927d729da33edaa20ebdf51a2430cc6b4e95
 * registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:ba74027bb4b244df0b0823ee29aa927d729da33edaa20ebdf51a2430cc6b4e95
image quay.io/rhceph-dev/mcg-operator@sha256:7883296b72541ce63d127cdfa0f92fcdd7d5e977add678365401ac668489c805
 * quay.io/rhceph-dev/mcg-operator@sha256:7883296b72541ce63d127cdfa0f92fcdd7d5e977add678365401ac668489c805
image quay.io/rhceph-dev/ocs-operator@sha256:a25b99a86f0fcabf2289c04495a75788e79f5e750425b8b54c056cfae958900c
 * quay.io/rhceph-dev/ocs-operator@sha256:2987b6300a63a155e8f20637b28f921804bf74bd34c6dbe1202890268a4a8a95
image quay.io/rhceph-dev/rhceph@sha256:eafd1acb0ada5d7cf93699056118aca19ed7a22e4938411d307ef94048746cc8
 * quay.io/rhceph-dev/rhceph@sha256:3def885ad9e8440c5bd6d5c830dafdd59edf9c9e8cce0042b0f44a5396b5b0f6
image quay.io/rhceph-dev/rook-ceph@sha256:d2a38f84f0c92d5427b41b9ff2b20db69c765291789e3419909d80255b1bbd7b
 * quay.io/rhceph-dev/rook-ceph@sha256:38e5d6daaaef3a933b6e2328efeaf79130011d74a77bc0451429e51d7aeaf3ff
```

Comment 17 Sébastien Han 2020-08-20 15:09:59 UTC
Martin, please provide must-gather logs.

Comment 19 Martin Bukatovic 2020-08-20 15:39:07 UTC
Additional observations
=======================

When the disk cache configuration changes (at about 01:13:50 PM CEST), I see this in the dmesg log of one of the workers (mbukatov-bz182756-ddbx5-worker-eastus1-pkhp8):

```
[Thu Aug 20 13:13:47 2020] libceph: osd0 (1)10.129.2.15:6801 socket closed (con state OPEN)
[Thu Aug 20 13:13:47 2020] libceph: osd0 (1)10.129.2.15:6801 socket error on write
[Thu Aug 20 13:13:47 2020] libceph: osd0 down
[Thu Aug 20 13:13:47 2020] scsi 5:0:0:1: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[Thu Aug 20 13:13:47 2020] sd 5:0:0:1: Attached scsi generic sg4 type 0
[Thu Aug 20 13:13:47 2020] sd 5:0:0:1: [sde] 1073741824 512-byte logical blocks: (550 GB/512 GiB)
[Thu Aug 20 13:13:47 2020] sd 5:0:0:1: [sde] 4096-byte physical blocks
[Thu Aug 20 13:13:47 2020] sd 5:0:0:1: [sde] Write Protect is off
[Thu Aug 20 13:13:47 2020] sd 5:0:0:1: [sde] Mode Sense: 0f 00 10 00
[Thu Aug 20 13:13:47 2020] sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, supports DPO and FUA
[Thu Aug 20 13:13:47 2020] sd 5:0:0:1: [sde] Attached SCSI disk
[Thu Aug 20 13:14:16 2020] libceph: osd1 (1)10.128.2.13:6801 socket closed (con state OPEN)
[Thu Aug 20 13:14:16 2020] libceph: osd1 (1)10.128.2.13:6801 socket error on read
[Thu Aug 20 13:14:17 2020] libceph: osd1 down
[Thu Aug 20 13:18:16 2020] INFO: task jbd2/rbd0-8:55938 blocked for more than 120 seconds.
[Thu Aug 20 13:18:16 2020]       Not tainted 4.18.0-193.14.3.el8_2.x86_64 #1
[Thu Aug 20 13:18:16 2020] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Aug 20 13:18:16 2020] jbd2/rbd0-8     D    0 55938      2 0x80004080
[Thu Aug 20 13:18:16 2020] Call Trace:
[Thu Aug 20 13:18:16 2020]  ? __schedule+0x24f/0x650
[Thu Aug 20 13:18:16 2020]  ? bit_wait_timeout+0x90/0x90
[Thu Aug 20 13:18:16 2020]  schedule+0x2f/0xa0
[Thu Aug 20 13:18:16 2020]  io_schedule+0x12/0x40
[Thu Aug 20 13:18:16 2020]  bit_wait_io+0xd/0x50
[Thu Aug 20 13:18:16 2020]  __wait_on_bit+0x6c/0x80
[Thu Aug 20 13:18:16 2020]  out_of_line_wait_on_bit+0x91/0xb0
[Thu Aug 20 13:18:16 2020]  ? init_wait_var_entry+0x40/0x40
[Thu Aug 20 13:18:16 2020]  jbd2_journal_commit_transaction+0x112c/0x1990 [jbd2]
[Thu Aug 20 13:18:16 2020]  ? finish_task_switch+0x76/0x2b0
[Thu Aug 20 13:18:16 2020]  kjournald2+0xbd/0x270 [jbd2]
[Thu Aug 20 13:18:16 2020]  ? finish_wait+0x80/0x80
[Thu Aug 20 13:18:16 2020]  ? commit_timeout+0x10/0x10 [jbd2]
[Thu Aug 20 13:18:16 2020]  kthread+0x112/0x130
[Thu Aug 20 13:18:16 2020]  ? kthread_flush_work_fn+0x10/0x10
[Thu Aug 20 13:18:16 2020]  ret_from_fork+0x35/0x40
[Thu Aug 20 13:18:16 2020] INFO: task prometheus:57512 blocked for more than 120 seconds.
[Thu Aug 20 13:18:16 2020]       Not tainted 4.18.0-193.14.3.el8_2.x86_64 #1
[Thu Aug 20 13:18:16 2020] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Thu Aug 20 13:18:16 2020] prometheus      D    0 57512  57452 0x00000080
```

The traceback here is a red herring: it is caused by the unavailability of the Ceph RBD-backed PV where Prometheus stores monitoring data in this configuration. But maybe the sde-related events captured there could be useful?
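For reference, a minimal sketch of one way to pull such kernel messages from a worker node (the node name is taken from the comment above; the exact collection method is not recorded here):

```
# Open a debug pod on the worker and read the kernel ring buffer with readable timestamps
oc debug node/mbukatov-bz182756-ddbx5-worker-eastus1-pkhp8 -- chroot /host dmesg -T | tail -n 60
```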

Comment 20 Michael Adam 2020-08-21 09:58:29 UTC
Azure being TP in 4.5, and this being moved to FailedQA just yesterday,
I think we need to move this out of OCS 4.5.

Should not block 4.5.0 GA imho.

Moving to proposed for a start, to re-evaluate for 4.5.0.

@Raz, @Martin

Comment 21 Martin Bukatovic 2020-08-21 12:20:39 UTC
Since we don't suggest that customers perform this operation, I agree that it doesn't look like a release blocker.

The workaround in this case seems to be to reboot all affected nodes, one by one.

Comment 22 Michael Adam 2020-08-24 12:05:39 UTC
(In reply to Martin Bukatovic from comment #21)
> Since we don't suggest customers to perform this operation, I agree that it
> doesn't look like a release blocker.
> 
> The workaround in this case seems to be to reboot all affected nodes, one by
> one.

Moving to 4.5.z for a start.

Comment 23 Raz Tamir 2020-08-25 06:56:55 UTC
(In reply to Michael Adam from comment #22)
> (In reply to Martin Bukatovic from comment #21)
> > Since we don't suggest customers to perform this operation, I agree that it
> > doesn't look like a release blocker.
> > 
> > The workaround in this case seems to be to reboot all affected nodes, one by
> > one.
> 
> Moving to 4.5.z for a start.

+1 on Martin's reply (comment #21)

Comment 25 Sébastien Han 2020-09-03 13:31:31 UTC
Ok, so the issue is the following:

1. The disk /dev/sdd is used by the OSD and identified by major:minor "8, 48"
2. Rook, in its init containers, copies the PVC's block device onto the OSD location /var/lib/ceph/osd/ceph-0/block (so it is still identified as "8, 48")
3. The cache is changed
4. A new disk appears! Basically, the copied disk identifier "8, 48" does not exist anymore
5. Now the disk is /dev/sde and is obviously different

Unfortunately, Kubernetes never re-runs the entire deployment; it only restarts the main container (called "osd").
So the OSD keeps trying to read /var/lib/ceph/osd/ceph-0/block, which points to nothing (an orphan fd, basically), and horribly fails forever.

The problem is that Kubernetes never re-runs the full deployment; if it did, we would go through the init container sequence again.
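A minimal sketch of how this can be observed from a debug shell on the affected worker (`oc debug node/<worker>`, then `chroot /host`); the device names below are illustrative, following the example above:

```
# After the caching change, Azure re-attaches the disk and it shows up as a new
# device (e.g. sde) with a new major:minor, while nothing is attached at 8:48 anymore
lsblk -o NAME,MAJ:MIN,SIZE,TYPE

# The block-device node that was copied into the OSD pod during init still carries
# the old 8:48 numbers, so ceph-osd can no longer open /var/lib/ceph/osd/ceph-0/block
```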

I've found a couple of "known" Kubernetes issues about this: https://github.com/kubernetes/kubernetes/issues/52345, and a KEP was opened but discontinued: https://github.com/kubernetes/enhancements/issues/871.

So it looks like we don't have a good way to fix this now.
I'd suggest adding a doc procedure for this operation.

Basically the procedure will be (a command sketch follows the list):

1. Identify the disk to change the cache on
2. Identify the OSD using this disk
3. scale down the osd deployment to 0
4. change the cache
5. scale up the osd deployment back to 1
Martin, Shekhar, are we fine moving this as a doc procedure?

@Erin, the doc text should be more like: "when the disk caching strategy is changed on a running OSD, the osd deployment will start failing as the disk disappeared".

Comment 27 Martin Bukatovic 2020-09-15 09:08:31 UTC
(In reply to leseb from comment #25)
> So it looks like we don't have a good way to fix this now.
> I'd suggest adding a doc procedure for this operation.
> 
> Basically the procedure will be:
> 
> 1. Identify the disk to change the cache on
> 2. Identify the OSD using this disk, this can be done
> 3. scale down the osd deployment to 0
> 4. change the cache
> 5. scale up the osd deployment back to 1
> 
> Martin, Shekhar, are we fine moving this as a doc procedure?

Yes, given the fact that this requires a fix in OCP, and that this is
a special case, the plan and proposed procedure look reasonable to me.

That said, to make sure we won't miss the issue in openshift/k8s, I cloned
the bug into OCP as BZ 1879029.

Thank you for debugging this.

> @Erin, the doc text should be more like: "when the disk caching strategy is
> changed on a running OSD, the osd deployment will start failing as the disk
> disappeared".

Comment 28 Sébastien Han 2020-09-21 14:54:06 UTC
Moving to 4.7 since it might take some time for the OCP RFE to be available.

Comment 29 Mudit Agarwal 2020-09-22 04:06:31 UTC
So, this BZ is a tracker for the OCP BZ and no code changes are required in OCS?

Comment 30 Sébastien Han 2020-09-22 07:34:15 UTC
Mudit, we might have code work to do eventually, once the RFE is implemented in OCP.

Comment 31 Mudit Agarwal 2020-09-22 08:24:50 UTC
Thanks Seb

Removing 4.5.z for now; we can retarget to the relevant release once this is fixed.

Comment 32 Martin Bukatovic 2020-10-23 16:33:25 UTC
Since BZ 1879029 was closed as NOTABUG, could you provide your suggestion on how to approach this from OCS's perspective?

Comment 33 Sébastien Han 2020-11-27 09:51:22 UTC
I'm trying to move forward with this. Given that our RFE was rejected by the OCP team, I'm moving this to a documentation bug.
For the doc team, please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1827569#c25 for the workaround.

Comment 36 Martin Bukatovic 2021-02-11 21:37:45 UTC
Based on the above comment 35, I'm retracting qa ack.

Comment 37 Martin Bukatovic 2021-02-11 21:39:24 UTC
And moving the component back to engineering for reevaluation.

Comment 38 Sébastien Han 2021-02-12 09:22:34 UTC
If we cannot manipulate the disk when it is detached, this becomes really difficult.
At this point, it's more an Azure issue than a Rook one, although it has never been a problem with Rook in the first place.

As a variant of the scale-down, you could patch the OSD deployment to prevent the OSD process from running, so that we avoid crashes.

So it will look like this:

1. save the osd deployment spec
2. patch the "osd" container command to "sleep"
3. the disk will still be attached
4. change the cache
5. scale down the deployment to 0
6. re-patch the "osd" container command with the original line (get it from the saved spec in step 1)

Let's see if this works.
Thanks Martin.

Comment 39 Travis Nielsen 2021-02-15 16:43:29 UTC
Moving to ON_QA since instructions have been provided again.

Comment 40 Elad 2021-02-16 16:10:15 UTC
Hi Travis, Ken, 

I assume the bug was moved to ON_QA based on a set of instructions for how to get out of the situation raised in this bug.
The intention was to add a warning in the Azure deploy and manage guides saying clearly that host caching should not be changed after OCS cluster deployment.

Comment 41 Travis Nielsen 2021-02-16 19:38:51 UTC
Agreed, this really isn't something we can find a way to address in the product, so we have to document it. Updating the component for that purpose...

We really need to warn them against changing the caching settings while Rook is running. Otherwise, the steps to recover are rather involved, as the attempts in comments #25 and #38 explain.

Comment 42 Elad 2021-02-16 20:33:51 UTC
Thanks Travis.
I actually filed a separate BZ about the inclusion of the warning following a discussion about this in the platforms DFG call. See bug 1929328.
I am moving it back to Rook. I am not very sure about the target release and state though.
What is the intention of having it as ON_QA? Shouldn't we track the actual code fix as part of this bug? If yes, I suggest moving it back to ASSIGNED and re-targeting

Comment 43 Travis Nielsen 2021-02-16 21:03:35 UTC
@Elad There is no known or planned code fix for this. If we have a separate doc BZ, shall we just close it? Or did I miss something else in this BZ?

Comment 44 Elad 2021-02-17 15:27:21 UTC
I think we can track the addition of the instructions suggested in comment #25 to the docs.

Comment 47 Sébastien Han 2021-02-22 09:53:22 UTC
The proposed OCP RFE was rejected in https://bugzilla.redhat.com/show_bug.cgi?id=1879029.
There is nothing Rook can do at the moment.

I'm adding one more instruction as part of the procedure:

1. save the osd deployment spec
2. remove the liveness probe: oc patch deployment rook-ceph-osd-0 --type=json -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]'
3. patch the "osd" container command to "sleep": oc patch deployment rook-ceph-osd-0 -p '{"spec": {"template": {"spec": {"containers": [{"name": "osd", "command": ["sleep", "infinity"], "args": []}]}}}}'
4. the disk will still be attached
5. change the cache
6. scale down the deployment to 0
7. re-patch the "osd" container command with the original line (get it from the saved spec in step 1; see the sketch after this list)
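A minimal sketch of the steps that are not spelled out as commands above (namespace and deployment name assumed from this bug; adjust them for the affected OSD):

```
# Step 1: save the current OSD deployment spec so the original command can be restored later
oc get deployment rook-ceph-osd-0 -n openshift-storage -o yaml > rook-ceph-osd-0.yaml

# Steps 2-3: run the two patch commands listed above, then change the host caching in Azure (step 5)

# Step 6: scale the deployment down
oc scale deployment rook-ceph-osd-0 -n openshift-storage --replicas=0

# Step 7: restore the original "osd" container command, args, and liveness probe
#         using the values recorded in rook-ceph-osd-0.yaml
```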

Moving again ON_QA for testing the procedure.

Comment 51 Elad 2021-03-01 11:31:53 UTC
(In reply to Sébastien Han from comment #47)

> Moving again ON_QA for testing the procedure.

Hi Sebastien, in that case, since there is no engineering work, shouldn't we track the instructions as part of a docs BZ? Also, where will the user find these instructions? I guess we need to either add them to the official docs or create a KCS article.

Rejy, Anjana, Bipin, what do you think?

Comment 52 Rejy M Cyriac 2021-03-02 17:33:02 UTC
(In reply to Elad from comment #51)
> (In reply to Sébastien Han from comment #47)
> 
> > Moving again ON_QA for testing the procedure.
> 
> Hi Sebastien, in that case, since there is not engineering work, shouldn't
> we track the instructions as part of a docs BZ? also, where will the user
> find these instructions? I guess we need to either add them to the official
> docs or create a KCS
> 
> Rejy, Anjana, Bipin, what do you think?


Looks to be information that needs to be put in a troubleshooting guide, or into a KCS.
I recommend that we open another Documentation BZ to track that work, and reference this engineering BZ there.

As for this Engineering BZ, it may be CLOSED WONTFIX/CANTFIX, with a reference provided to the rejected RFE request at OCP.

Comment 53 Sébastien Han 2021-03-03 10:05:18 UTC
Thanks Rejy,

I'm closing this as per https://bugzilla.redhat.com/show_bug.cgi?id=1827569#c52.
An RFE BZ was raised in https://bugzilla.redhat.com/show_bug.cgi?id=1879029 and rejected, so this is being closed as CANTFIX.

Thanks.

Comment 54 Martin Bukatovic 2021-06-04 10:52:20 UTC
Dropping the needinfo request to test the proposed procedure from comment 38 and comment 47, since there is no intention to qualify such a procedure at the moment.

Comment 55 Red Hat Bugzilla 2023-09-15 00:31:14 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

