Red Hat Bugzilla – Attachment 1927370 Details for Bug 2147526 – After performing drive detach and attach, the OSD count does not increase
Description: oc describe for one OSD pod which is in CrashLoopBackOff state
Filename: ocdescribepods_osd.txt
MIME Type: text/plain
Creator: Bhavana
Created: 2022-11-25 07:01:02 UTC
Size: 20.33 KB
Name: rook-ceph-osd-8-565bb8bc89-hx96t
Namespace: openshift-storage
Priority: 2000001000
Priority Class Name: system-node-critical
Node: worker3.sd1.openshift.fm/192.168.7.208
Start Time: Wed, 23 Nov 2022 16:58:57 +0000
Labels: app=rook-ceph-osd
        app.kubernetes.io/component=cephclusters.ceph.rook.io
        app.kubernetes.io/created-by=rook-ceph-operator
        app.kubernetes.io/instance=8
        app.kubernetes.io/managed-by=rook-ceph-operator
        app.kubernetes.io/name=ceph-osd
        app.kubernetes.io/part-of=ocs-storagecluster-cephcluster
        ceph-osd-id=8
        ceph-version=16.2.8-84
        ceph.rook.io/DeviceSet=ocs-deviceset-0
        ceph.rook.io/pvc=ocs-deviceset-0-data-6gm2pc
        ceph_daemon_id=8
        ceph_daemon_type=osd
        failure-domain=ocs-deviceset-0-data-6gm2pc
        osd=8
        pod-template-hash=565bb8bc89
        portable=true
        rook-version=v4.11.3-0.224a35508091e5dcf8f09dd910118b75ef52f84e
        rook.io/operator-namespace=openshift-storage
        rook_cluster=openshift-storage
        topology-location-host=ocs-deviceset-0-data-6gm2pc
        topology-location-rack=rack1
        topology-location-root=default
Annotations: k8s.v1.cni.cncf.io/network-status:
               [{
                   "name": "openshift-sdn",
                   "interface": "eth0",
                   "ips": [
                       "10.131.0.37"
                   ],
                   "default": true,
                   "dns": {}
               }]
             k8s.v1.cni.cncf.io/networks-status:
               [{
                   "name": "openshift-sdn",
                   "interface": "eth0",
                   "ips": [
                       "10.131.0.37"
                   ],
                   "default": true,
                   "dns": {}
               }]
             openshift.io/scc: rook-ceph
Status: Pending
IP: 10.131.0.37
IPs:
  IP: 10.131.0.37
Controlled By: ReplicaSet/rook-ceph-osd-8-565bb8bc89
Init Containers:
  blkdevmapper:
    Container ID: cri-o://7a644225386065993ab14de5f71a82886e136178b2aa2f42c08f902c5dc87d68
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:03c87d18494d1d1796c8729871c001057d0fa19826a672d1bc34da6609e551ba
    Port: <none>
    Host Port: <none>
    Command:
      /bin/bash
      -c

      set -xe

      PVC_SOURCE=/ocs-deviceset-0-data-6gm2pc
      PVC_DEST=/var/lib/ceph/osd/ceph-8/block
      CP_ARGS=(--archive --dereference --verbose)

      if [ -b "$PVC_DEST" ]; then
        PVC_SOURCE_MAJ_MIN=$(stat --format '%t%T' $PVC_SOURCE)
        PVC_DEST_MAJ_MIN=$(stat --format '%t%T' $PVC_DEST)
        if [[ "$PVC_SOURCE_MAJ_MIN" == "$PVC_DEST_MAJ_MIN" ]]; then
          echo "PVC $PVC_DEST already exists and has the same major and minor as $PVC_SOURCE: "$PVC_SOURCE_MAJ_MIN""
          exit 0
        else
          echo "PVC's source major/minor numbers changed"
          CP_ARGS+=(--remove-destination)
        fi
      fi

      cp "${CP_ARGS[@]}" "$PVC_SOURCE" "$PVC_DEST"

    State: Terminated
      Reason: Completed
      Exit Code: 0
      Started: Wed, 23 Nov 2022 16:58:59 +0000
      Finished: Wed, 23 Nov 2022 16:58:59 +0000
    Ready: True
    Restart Count: 0
    Limits:
      cpu: 2
      memory: 5Gi
    Requests:
      cpu: 2
      memory: 5Gi
    Environment: <none>
    Mounts:
      /var/lib/ceph/osd/ceph-8 from ocs-deviceset-0-data-6gm2pc-bridge (rw,path="ceph-8")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
    Devices:
      /ocs-deviceset-0-data-6gm2pc from ocs-deviceset-0-data-6gm2pc
  blkdevmapper-metadata:
    Container ID: cri-o://ccf23642ab6583f25ce99a5929e4e8fe6327f5e18802ddc26158f6952caaf87c
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:03c87d18494d1d1796c8729871c001057d0fa19826a672d1bc34da6609e551ba
    Port: <none>
    Host Port: <none>
    Command:
      /bin/bash
      -c

      set -xe

      PVC_SOURCE=/ocs-deviceset-0-metadata-6f4z9s
      PVC_DEST=/var/lib/ceph/osd/ceph-8/block.db
      CP_ARGS=(--archive --dereference --verbose)

      if [ -b "$PVC_DEST" ]; then
        PVC_SOURCE_MAJ_MIN=$(stat --format '%t%T' $PVC_SOURCE)
        PVC_DEST_MAJ_MIN=$(stat --format '%t%T' $PVC_DEST)
        if [[ "$PVC_SOURCE_MAJ_MIN" == "$PVC_DEST_MAJ_MIN" ]]; then
          echo "PVC $PVC_DEST already exists and has the same major and minor as $PVC_SOURCE: "$PVC_SOURCE_MAJ_MIN""
          exit 0
        else
          echo "PVC's source major/minor numbers changed"
          CP_ARGS+=(--remove-destination)
        fi
      fi

      cp "${CP_ARGS[@]}" "$PVC_SOURCE" "$PVC_DEST"

    State: Terminated
      Reason: Completed
      Exit Code: 0
      Started: Wed, 23 Nov 2022 16:58:59 +0000
      Finished: Wed, 23 Nov 2022 16:58:59 +0000
    Ready: True
    Restart Count: 0
    Limits:
      cpu: 2
      memory: 5Gi
    Requests:
      cpu: 2
      memory: 5Gi
    Environment: <none>
    Mounts:
      /var/lib/ceph/osd/ceph-8 from ocs-deviceset-0-data-6gm2pc-bridge (rw,path="ceph-8")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
    Devices:
      /ocs-deviceset-0-metadata-6f4z9s from ocs-deviceset-0-metadata-6f4z9s
  blkdevmapper-wal:
    Container ID: cri-o://956888b0be34e2fdbc0317b85dffe91629bafad58d20e6e17d222f4af9651ee1
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:03c87d18494d1d1796c8729871c001057d0fa19826a672d1bc34da6609e551ba
    Port: <none>
    Host Port: <none>
    Command:
      /bin/bash
      -c

      set -xe

      PVC_SOURCE=/ocs-deviceset-0-wal-65gnbk
      PVC_DEST=/var/lib/ceph/osd/ceph-8/block.wal
      CP_ARGS=(--archive --dereference --verbose)

      if [ -b "$PVC_DEST" ]; then
        PVC_SOURCE_MAJ_MIN=$(stat --format '%t%T' $PVC_SOURCE)
        PVC_DEST_MAJ_MIN=$(stat --format '%t%T' $PVC_DEST)
        if [[ "$PVC_SOURCE_MAJ_MIN" == "$PVC_DEST_MAJ_MIN" ]]; then
          echo "PVC $PVC_DEST already exists and has the same major and minor as $PVC_SOURCE: "$PVC_SOURCE_MAJ_MIN""
          exit 0
        else
          echo "PVC's source major/minor numbers changed"
          CP_ARGS+=(--remove-destination)
        fi
      fi

      cp "${CP_ARGS[@]}" "$PVC_SOURCE" "$PVC_DEST"

    State: Terminated
      Reason: Completed
      Exit Code: 0
      Started: Wed, 23 Nov 2022 16:59:00 +0000
      Finished: Wed, 23 Nov 2022 16:59:00 +0000
    Ready: True
    Restart Count: 0
    Limits:
      cpu: 2
      memory: 5Gi
    Requests:
      cpu: 2
      memory: 5Gi
    Environment: <none>
    Mounts:
      /var/lib/ceph/osd/ceph-8 from ocs-deviceset-0-data-6gm2pc-bridge (rw,path="ceph-8")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
    Devices:
      /ocs-deviceset-0-wal-65gnbk from ocs-deviceset-0-wal-65gnbk
  activate:
    Container ID: cri-o://3a685779b589724192f75e0457615f6e655f379c4200af4c4e64d16502cf9e97
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:03c87d18494d1d1796c8729871c001057d0fa19826a672d1bc34da6609e551ba
    Port: <none>
    Host Port: <none>
    Command:
      ceph-bluestore-tool
    Args:
      prime-osd-dir
      --dev
      /var/lib/ceph/osd/ceph-8/block
      --path
      /var/lib/ceph/osd/ceph-8
      --no-mon-config
    State: Terminated
      Reason: Completed
      Exit Code: 0
      Started: Wed, 23 Nov 2022 16:59:01 +0000
      Finished: Wed, 23 Nov 2022 16:59:01 +0000
    Ready: True
    Restart Count: 0
    Limits:
      cpu: 2
      memory: 5Gi
    Requests:
      cpu: 2
      memory: 5Gi
    Environment: <none>
    Mounts:
      /var/lib/ceph/osd/ceph-8 from ocs-deviceset-0-data-6gm2pc-bridge (rw,path="ceph-8")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
    Devices:
      /var/lib/ceph/osd/ceph-8/block from ocs-deviceset-0-data-6gm2pc
  expand-bluefs:
    Container ID: cri-o://fa3923b127b376e881190007227c77dbbd134c5fc75a8484472c742d2133aee7
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:03c87d18494d1d1796c8729871c001057d0fa19826a672d1bc34da6609e551ba
    Port: <none>
    Host Port: <none>
    Command:
      ceph-bluestore-tool
    Args:
      bluefs-bdev-expand
      --path
      /var/lib/ceph/osd/ceph-8
    State: Waiting
      Reason: CrashLoopBackOff
    Last State: Terminated
      Reason: Error
      Exit Code: 134
      Started: Fri, 25 Nov 2022 06:23:35 +0000
      Finished: Fri, 25 Nov 2022 06:23:37 +0000
    Ready: False
    Restart Count: 442
    Limits:
      cpu: 2
      memory: 5Gi
    Requests:
      cpu: 2
      memory: 5Gi
    Environment: <none>
    Mounts:
      /var/lib/ceph/osd/ceph-8 from ocs-deviceset-0-data-6gm2pc-bridge (rw,path="ceph-8")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
  chown-container-data-dir:
    Container ID:
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID:
    Port: <none>
    Host Port: <none>
    Command:
      chown
    Args:
      --verbose
      --recursive
      ceph:ceph
      /var/log/ceph
      /var/lib/ceph/crash
      /var/lib/ceph/osd/ceph-8
    State: Waiting
      Reason: PodInitializing
    Ready: False
    Restart Count: 0
    Limits:
      cpu: 2
      memory: 5Gi
    Requests:
      cpu: 2
      memory: 5Gi
    Environment: <none>
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /run/udev from run-udev (rw)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/ceph/osd/ceph-8 from ocs-deviceset-0-data-6gm2pc-bridge (rw,path="ceph-8")
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
Containers:
  osd:
    Container ID:
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID:
    Port: <none>
    Host Port: <none>
    Command:
      ceph-osd
    Args:
      --foreground
      --id
      8
      --fsid
      b81b4eba-e3e2-4d7d-bc26-a75e8d29c7f3
      --setuser
      ceph
      --setgroup
      ceph
      --crush-location=root=default host=ocs-deviceset-0-data-6gm2pc rack=rack1
      --log-to-stderr=true
      --err-to-stderr=true
      --mon-cluster-log-to-stderr=true
      --log-stderr-prefix=debug
      --default-log-to-file=false
      --default-mon-cluster-log-to-file=false
      --ms-learn-addr-from-peer=false
    State: Waiting
      Reason: PodInitializing
    Ready: False
    Restart Count: 0
    Limits:
      cpu: 2
      memory: 5Gi
    Requests:
      cpu: 2
      memory: 5Gi
    Liveness: exec [env -i sh -c ceph --admin-daemon /run/ceph/ceph-osd.8.asok status] delay=10s timeout=2s period=10s #success=1 #failure=3
    Startup: exec [env -i sh -c ceph --admin-daemon /run/ceph/ceph-osd.8.asok status] delay=10s timeout=2s period=10s #success=1 #failure=720
    Environment Variables from:
      rook-ceph-osd-env-override ConfigMap Optional: true
    Environment:
      ROOK_NODE_NAME: ocs-deviceset-0-data-6gm2pc
      ROOK_CLUSTER_ID: 78b24f8c-bc52-412b-ae19-704482a013e5
      ROOK_CLUSTER_NAME: ocs-storagecluster-cephcluster
      ROOK_PRIVATE_IP: (v1:status.podIP)
      ROOK_PUBLIC_IP: (v1:status.podIP)
      POD_NAMESPACE: openshift-storage
      ROOK_MON_ENDPOINTS: <set to the key 'data' of config map 'rook-ceph-mon-endpoints'> Optional: false
      ROOK_MON_SECRET: <set to the key 'mon-secret' in secret 'rook-ceph-mon'> Optional: false
      ROOK_CEPH_USERNAME: <set to the key 'ceph-username' in secret 'rook-ceph-mon'> Optional: false
      ROOK_CEPH_SECRET: <set to the key 'ceph-secret' in secret 'rook-ceph-mon'> Optional: false
      ROOK_CONFIG_DIR: /var/lib/rook
      ROOK_CEPH_CONFIG_OVERRIDE: /etc/rook/config/override.conf
      ROOK_FSID: <set to the key 'fsid' in secret 'rook-ceph-mon'> Optional: false
      NODE_NAME: (v1:spec.nodeName)
      ROOK_CRUSHMAP_ROOT: default
      ROOK_CRUSHMAP_HOSTNAME: ocs-deviceset-0-data-6gm2pc
      CEPH_VOLUME_DEBUG: 1
      CEPH_VOLUME_SKIP_RESTORECON: 1
      DM_DISABLE_UDEV: 1
      CONTAINER_IMAGE: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
      POD_NAME: rook-ceph-osd-8-565bb8bc89-hx96t (v1:metadata.name)
      POD_MEMORY_LIMIT: 5368709120 (limits.memory)
      POD_MEMORY_REQUEST: 5368709120 (requests.memory)
      POD_CPU_LIMIT: 2 (limits.cpu)
      POD_CPU_REQUEST: 2 (requests.cpu)
      ROOK_OSD_UUID: c4d5b59d-f69f-408d-bff9-abec1a09415b
      ROOK_OSD_ID: 8
      ROOK_CEPH_MON_HOST: <set to the key 'mon_host' in secret 'rook-ceph-config'> Optional: false
      CEPH_ARGS: -m $(ROOK_CEPH_MON_HOST)
      ROOK_BLOCK_PATH: /mnt/ocs-deviceset-0-data-6gm2pc
      ROOK_CV_MODE: raw
      ROOK_OSD_DEVICE_CLASS: nvme
      ROOK_OSD_PVC_SIZE: 2Ti
      ROOK_TOPOLOGY_AFFINITY: topology.rook.io/rack=rack1
      ROOK_PVC_BACKED_OSD: true
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /run/udev from run-udev (rw)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/lib/ceph/osd/ceph-8 from ocs-deviceset-0-data-6gm2pc-bridge (rw,path="ceph-8")
      /var/lib/rook from rook-data (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
  log-collector:
    Container ID:
    Image: registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c
    Image ID:
    Port: <none>
    Host Port: <none>
    Command:
      /bin/bash
      -x
      -e
      -m
      -c

      CEPH_CLIENT_ID=ceph-osd.8
      PERIODICITY=daily
      LOG_ROTATE_CEPH_FILE=/etc/logrotate.d/ceph
      LOG_MAX_SIZE=524M

      # edit the logrotate file to only rotate a specific daemon log
      # otherwise we will logrotate log files without reloading certain daemons
      # this might happen when multiple daemons run on the same machine
      sed -i "s|*.log|$CEPH_CLIENT_ID.log|" "$LOG_ROTATE_CEPH_FILE"

      # replace default daily with given user input
      sed --in-place "s/daily/$PERIODICITY/g" "$LOG_ROTATE_CEPH_FILE"

      if [ "$LOG_MAX_SIZE" != "0" ]; then
        # adding maxsize $LOG_MAX_SIZE at the 4th line of the logrotate config file with 4 spaces to maintain indentation
        sed --in-place "4i \ \ \ \ maxsize $LOG_MAX_SIZE" "$LOG_ROTATE_CEPH_FILE"
      fi

      while true; do
        # we don't force the logrorate but we let the logrotate binary handle the rotation based on user's input for periodicity and size
        logrotate --verbose "$LOG_ROTATE_CEPH_FILE"
        sleep 15m
      done

    State: Waiting
      Reason: PodInitializing
    Ready: False
    Restart Count: 0
    Environment: <none>
    Mounts:
      /etc/ceph from rook-config-override (ro)
      /var/lib/ceph/crash from rook-ceph-crash (rw)
      /var/log/ceph from rook-ceph-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tftnt (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  rook-data:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit: <unset>
  rook-config-override:
    Type: Projected (a volume that contains injected data from multiple sources)
    ConfigMapName: rook-config-override
    ConfigMapOptional: <nil>
  rook-ceph-log:
    Type: HostPath (bare host directory volume)
    Path: /var/lib/rook/openshift-storage/log
    HostPathType:
  rook-ceph-crash:
    Type: HostPath (bare host directory volume)
    Path: /var/lib/rook/openshift-storage/crash
    HostPathType:
  ocs-deviceset-0-data-6gm2pc:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: ocs-deviceset-0-data-6gm2pc
    ReadOnly: false
  ocs-deviceset-0-data-6gm2pc-bridge:
    Type: HostPath (bare host directory volume)
    Path: /var/lib/rook/openshift-storage/ocs-deviceset-0-data-6gm2pc
    HostPathType: DirectoryOrCreate
  ocs-deviceset-0-metadata-6f4z9s:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: ocs-deviceset-0-metadata-6f4z9s
    ReadOnly: false
  ocs-deviceset-0-metadata-6f4z9s-bridge:
    Type: HostPath (bare host directory volume)
    Path: /var/lib/rook/openshift-storage/ocs-deviceset-0-metadata-6f4z9s
    HostPathType: DirectoryOrCreate
  ocs-deviceset-0-wal-65gnbk:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: ocs-deviceset-0-wal-65gnbk
    ReadOnly: false
  ocs-deviceset-0-wal-65gnbk-bridge:
    Type: HostPath (bare host directory volume)
    Path: /var/lib/rook/openshift-storage/ocs-deviceset-0-wal-65gnbk
    HostPathType: DirectoryOrCreate
  run-udev:
    Type: HostPath (bare host directory volume)
    Path: /run/udev
    HostPathType:
  kube-api-access-tftnt:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional: <nil>
    DownwardAPI: true
    ConfigMapName: openshift-service-ca.crt
    ConfigMapOptional: <nil>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 5s
             node.ocs.openshift.io/storage=true:NoSchedule
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Normal   Pulled   176m (x409 over 37h)    kubelet  Container image "registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7892e9da0a70b2d7e3efd98d2cb980e485f07eddff6a0dac6d6bd6c516914f3c" already present on machine
  Warning  BackOff  108s (x10264 over 37h)  kubelet  Back-off restarting failed container
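When reading a dump like the above, the failing container is the one whose State block shows Waiting/CrashLoopBackOff alongside a nonzero Restart Count. A minimal sketch of filtering such a saved dump for those lines (the excerpt and the /tmp path here are illustrative stand-ins, not part of the attachment):

```shell
# Write a small excerpt, copied from the expand-bluefs section of the dump,
# to a scratch file so the example is self-contained.
cat > /tmp/describe_excerpt.txt <<'EOF'
  expand-bluefs:
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    134
    Restart Count:  442
EOF

# Pull out the state/reason/restart lines that flag the crash-looping container.
grep -E 'Reason:|Exit Code:|Restart Count:' /tmp/describe_excerpt.txt
```

Run against the full ocdescribepods_osd.txt instead of the excerpt, the same filter surfaces every container's termination reason and restart count at a glance.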