Bug 1887026
| Summary: | FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Archana Prabhakar <aprabhak> |
| Component: | Storage | Assignee: | Jan Safranek <jsafrane> |
| Storage sub component: | Kubernetes | QA Contact: | Qin Ping <piqin> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | unspecified | CC: | aos-bugs, danili, jpoulin, jsafrane, manokuma, mkumatag |
| Version: | 4.6 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.7.0 | | |
| Hardware: | ppc64le | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-02-24 15:24:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description
Archana Prabhakar
2020-10-10 06:40:07 UTC
This is happening due to a failed pattern match of the FC path in the k8s code. Created an issue in the k8s community and submitted a PR for the same:

https://github.com/kubernetes/kubernetes/issues/95450
https://github.com/kubernetes/kubernetes/pull/95451

(In reply to mkumatag from comment #1)
> This is happening due to failure of pattern match of the fcpath in the k8s
> code, created an issue in the k8s community and submitted a PR for the same:
>
> https://github.com/kubernetes/kubernetes/issues/95450
> https://github.com/kubernetes/kubernetes/pull/95451

This patch got merged, as did the cherry-pick https://github.com/kubernetes/kubernetes/pull/95610 for the 1.19 release. Not sure how to get this into the OCP repository!

Thanks for the upstream patch!
> Not sure how to get this into the OCP repository!
We can take it from here and it should get merged soon. I'm not sure about our QA environment; they may need some help verifying the bug fix once it's merged and part of a 4.7 nightly.
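For context, the root cause is a pattern match over the FC device links under /dev/disk/by-path: the link names on PowerVM hosts differ from the form an over-strict pattern expects, so no disk is found. A loose shell sketch of the idea, with illustrative sample paths and an illustrative pattern (not the actual kubelet code):

```shell
# Illustrative only: two plausible by-path link names for the same FC LUN.
# On some platforms the "pci-..." prefix is absent; this kind of variation
# is what the upstream fix (kubernetes/kubernetes#95451) accounts for.
paths='pci-0000:15:00.0-fc-0x5005076802233f81-lun-3
fc-0x5005076802233f81-lun-3'

# A pattern anchored on the WWN and LUN matches both forms:
echo "$paths" | grep -c -E '(^|-)fc-0x5005076802233f81-lun-3$'
```

A pattern that instead insisted on a fixed prefix would miss the second form, which matches the "no fc disk found" symptom.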
(In reply to Jan Safranek from comment #4)
> Thanks for the upstream patch!
>
> > Not sure how to get this into the OCP repository!
>
> We can take it from here and it should get merged soon. Not sure about our
> QA environment, they may need some help verifying the bugfix once it's
> merged and part of 4.7 nightly.

No problem, we can help verify the issue once it gets into the build.

Bug verified on the 4.7 nightly below.
[root@fct-arc47-bastion ~]# oc version
Client Version: 4.7.0-0.nightly-ppc64le-2020-11-03-090148
Server Version: 4.7.0-0.nightly-ppc64le-2020-11-03-090148
Kubernetes Version: v1.19.0+74d9cb5
From the bastion, create a PV and PVC for the new 5GB data volume, using its WWN and LUN ID from the output of the previous step.
[root@fct-arc47-bastion fc]# cat fcnew-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-new-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  #volumeMode: Block
  #persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  fc:
    targetWWNs: ['5005076802233f81']
    lun: 3
    readOnly: true
    fsType: ext4
[root@fct-arc47-bastion fc]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
fc-new-pv 5Gi RWO Retain Available manual 2s
[root@fct-arc47-bastion fc]# cat fcnew-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fc-new-pvc
spec:
  accessModes:
    - ReadWriteOnce
  #volumeMode: Block
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual
[root@fct-arc47-bastion fc]# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
fc-new-pv 5Gi RWO Retain Bound default/fc-new-pvc manual 2m1s
pvc-9b6c54fd-4dd4-4a1a-97d1-5b0453bc33ce 20Gi RWX Delete Bound openshift-image-registry/registry-pvc nfs-storage-provisioner 5d19h
[root@fct-arc47-bastion fc]# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
fc-new-pvc Bound fc-new-pv 5Gi RWO manual 10s
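Binding can also be polled instead of re-running oc get. A minimal sketch of a wait loop; the phase function here is a stand-in so the snippet runs standalone, and would be replaced with the real oc query shown in the comment:

```shell
# Stand-in for: oc get pvc fc-new-pvc -o jsonpath='{.status.phase}'
# (stubbed so this sketch runs without a cluster)
phase() { echo "Bound"; }

# Poll until the claim reports Bound (hypothetical helper, not part of
# the original verification steps).
until [ "$(phase)" = "Bound" ]; do
  sleep 2
done
echo "fc-new-pvc is $(phase)"
```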
Create a pod using the above PVC.
[root@fct-arc47-bastion fc]# cat fc-new-pvc-pod.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-fc-new-pvc-mnt
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-fc-new
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: fc-vol
          mountPath: /mnt/fc
      volumes:
      - name: fc-vol
        persistentVolumeClaim:
          claimName: fc-new-pvc
      nodeSelector:
        fcnode: wkr
Log in to the pod and run the following checks.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 120G 0 disk
|-sda1 8:1 0 4M 0 part
|-sda2 8:2 0 384M 0 part
`-sda4 8:4 0 119.6G 0 part
sdb 8:16 0 120G 0 disk
|-sdb1 8:17 0 4M 0 part
|-sdb2 8:18 0 384M 0 part
`-sdb4 8:20 0 119.6G 0 part
sdc 8:32 0 120G 0 disk
|-sdc1 8:33 0 4M 0 part
|-sdc2 8:34 0 384M 0 part
`-sdc4 8:36 0 119.6G 0 part
sdd 8:48 0 120G 0 disk
|-sdd1 8:49 0 4M 0 part
|-sdd2 8:50 0 384M 0 part
`-sdd4 8:52 0 119.6G 0 part
`-coreos-luks-root-nocrypt 253:0 0 119.6G 0 dm /dev/termination-log
sde 8:64 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdf 8:80 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdg 8:96 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdh 8:112 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdi 8:128 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdj 8:144 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdk 8:160 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdl 8:176 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdm 8:192 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /mnt/fc
sdn 8:208 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /mnt/fc
sdo 8:224 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /mnt/fc
sdp 8:240 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /mnt/fc
Ensure that lsblk shows the new mount path [/mnt/fc] mapped to the 5GB disk.
[root@fct-arc47-bastion fc]# oc exec -it nginx-fc-new-pvc-mnt-f46576646-dwgd9 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 120G 16G 104G 14% /
tmpfs 64M 0 64M 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
tmpfs 16G 131M 16G 1% /etc/hostname
/dev/mapper/mpathd 4.9G 20M 4.9G 1% /mnt/fc
/dev/mapper/coreos-luks-root-nocrypt 120G 16G 104G 14% /etc/hosts
tmpfs 16G 256K 16G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 16G 0 16G 0% /proc/scsi
tmpfs 16G 0 16G 0% /sys/firmware
Create a new file under /mnt/fc to verify that the path is writable from the pod.
# cd /mnt/fc
# ls -l
total 16
drwx------. 2 root root 16384 Nov 9 08:07 lost+found
# touch aa
# ls -ltr
total 16
drwx------. 2 root root 16384 Nov 9 08:07 lost+found
-rw-r--r--. 1 root root 0 Nov 9 08:11 aa
# pwd
/mnt/fc
On worker-0, run lsblk to confirm that the 5GB disk is mounted into the pod's volume path on the node.
[root@worker-0 core]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 120G 0 disk
|-sda1 8:1 0 4M 0 part
|-sda2 8:2 0 384M 0 part
`-sda4 8:4 0 119.6G 0 part
sdb 8:16 0 120G 0 disk
|-sdb1 8:17 0 4M 0 part
|-sdb2 8:18 0 384M 0 part
`-sdb4 8:20 0 119.6G 0 part
sdc 8:32 0 120G 0 disk
|-sdc1 8:33 0 4M 0 part
|-sdc2 8:34 0 384M 0 part
`-sdc4 8:36 0 119.6G 0 part
sdd 8:48 0 120G 0 disk
|-sdd1 8:49 0 4M 0 part
|-sdd2 8:50 0 384M 0 part /boot
`-sdd4 8:52 0 119.6G 0 part
`-coreos-luks-root-nocrypt 253:0 0 119.6G 0 dm /sysroot
sde 8:64 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdf 8:80 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdg 8:96 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdh 8:112 0 7G 0 disk
`-mpathb 253:1 0 7G 0 mpath
sdi 8:128 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdj 8:144 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdk 8:160 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdl 8:176 0 4G 0 disk
`-mpathc 253:2 0 4G 0 mpath
sdm 8:192 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv
sdn 8:208 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv
sdo 8:224 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv
sdp 8:240 0 5G 0 disk
`-mpathd 253:3 0 5G 0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv
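The mountpoints in the lsblk output above follow the kubelet's on-node layout for FC volumes: the pod UID, the kubernetes.io~fc plugin directory, and the PV name. A small sketch reconstructing that path from the values seen above:

```shell
# Values taken from the lsblk output above.
pod_uid='b91334fd-2b22-4518-8913-213854f5a277'
pv_name='fc-new-pv'

# The kubelet mounts the FC PV for this pod at this path on the worker:
echo "/var/lib/kubelet/pods/${pod_uid}/volumes/kubernetes.io~fc/${pv_name}"
```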
[root@fct-arc47-bastion ~]# oc describe pv fc-new-pv
Name: fc-new-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/fc-new-pvc
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: FC (a Fibre Channel disk)
TargetWWNs: 5005076802233f81
LUN: 3
FSType: ext4
ReadOnly: true
Events: <none>
[root@fct-arc47-bastion ~]# oc describe pod nginx-fc-new-pvc-mnt-f46576646-dwgd9
Name: nginx-fc-new-pvc-mnt-f46576646-dwgd9
Namespace: default
Priority: 0
Node: worker-0/9.114.97.113
Start Time: Mon, 09 Nov 2020 03:07:19 -0500
Labels: app=nginx
pod-template-hash=f46576646
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.131.0.84"
],
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.131.0.84"
],
"default": true,
"dns": {}
}]
Status: Running
IP: 10.131.0.84
IPs:
IP: 10.131.0.84
Controlled By: ReplicaSet/nginx-fc-new-pvc-mnt-f46576646
Containers:
nginx-fc-new:
Container ID: cri-o://fa730825ff11f6f403e40d61caffc8eac8d7957198970b28f3cfb3498eeaeda8
Image: nginx:latest
Image ID: docker.io/library/nginx@sha256:44cbc8f1b1d4f2caae09062fd2a77c98c911c056433b4f93f04efc3623dccb6b
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 09 Nov 2020 03:07:27 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/mnt/fc from fc-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6d5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
fc-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: fc-new-pvc
ReadOnly: false
default-token-6p6d5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6p6d5
Optional: false
QoS Class: BestEffort
Node-Selectors: fcnode=wkr
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3h45m default-scheduler Successfully assigned default/nginx-fc-new-pvc-mnt-f46576646-dwgd9 to worker-0
Normal AddedInterface 3h45m multus Add eth0 [10.131.0.84/23]
Normal Pulling 3h45m kubelet Pulling image "nginx:latest"
Normal Pulled 3h45m kubelet Successfully pulled image "nginx:latest" in 725.617821ms
Normal Created 3h44m kubelet Created container nginx-fc-new
Normal Started 3h44m kubelet Started container nginx-fc-new
Thank you, Archana! I'll mark this as verified.
Scenario 2: Using wwid
1. Create a pod using the WWID of the 3GB disk attached to worker-1. Create /mnt/fc1 on worker-1.
2. Get the WWID for the 3GB disk from ls -l /dev/disk/by-id.
The 3GB disk maps to mpathb in lsblk, so pick the WWID entry for mpathb from /dev/disk/by-id.
lrwxrwxrwx. 1 root root 10 Nov 10 07:45 dm-name-mpathb -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Nov 10 07:45 wwn-0x6005076d0281005ef00000000001058f -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Nov 10 07:45 scsi-36005076d0281005ef00000000001058f -> ../../dm-1
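Picking the WWID can be scripted; a small sketch over sample lines copied from the listing above. Note the pod spec below uses the scsi-… value with the scsi- prefix stripped:

```shell
# Sample entries copied from the ls -l /dev/disk/by-id output above.
listing='dm-name-mpathb -> ../../dm-1
wwn-0x6005076d0281005ef00000000001058f -> ../../dm-1
scsi-36005076d0281005ef00000000001058f -> ../../dm-1'

# Strip the "scsi-" prefix to get the wwid value used in the pod spec.
echo "$listing" | sed -n 's/^scsi-\([0-9a-f]*\) .*/\1/p'
```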
3. Create the pod with the wwid.
[root@fct-arc47-bastion targetwwn]# cat wwid-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wwid-fc-pod
spec:
  containers:
  - image: ubuntu:latest
    command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
    name: fc-wwid
    volumeMounts:
    - name: fc-vol2
      mountPath: /mnt/fc1
  volumes:
  - name: fc-vol2
    fc:
      wwids:
      - '36005076d0281005ef00000000001058f'
      fsType: ext4
  nodeSelector:
    fcnode1: wkr1
[root@fct-arc47-bastion targetwwn]# oc get pods
NAME READY STATUS RESTARTS AGE
wwid-fc-pod 1/1 Running 0 135m
[root@fct-arc47-bastion targetwwn]# oc describe pod wwid-fc-pod
Name: wwid-fc-pod
Namespace: default
Priority: 0
Node: worker-1/9.114.97.112
Start Time: Tue, 10 Nov 2020 03:09:39 -0500
Labels: <none>
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.128.2.80"
],
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.128.2.80"
],
"default": true,
"dns": {}
}]
Status: Running
IP: 10.128.2.80
IPs:
IP: 10.128.2.80
Containers:
fc-wwid:
Container ID: cri-o://ba34b8e4b0ca13b1dd3e7996b969167f19f4571cf270a9cb860e6f39189651fe
Image: ubuntu:latest
Image ID: docker.io/library/ubuntu@sha256:ad426b7dd24d6fa923c1f46dce9a747082d0ef306716aefdc959e4cd23ffd22b
Port: <none>
Host Port: <none>
Command:
/bin/sh
-ec
while :; do echo '.'; sleep 5 ; done
State: Running
Started: Tue, 10 Nov 2020 03:09:42 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/mnt/fc1 from fc-vol2 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6d5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
fc-vol2:
Type: FC (a Fibre Channel disk)
TargetWWNs:
LUN: <none>
FSType: ext4
ReadOnly: false
default-token-6p6d5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6p6d5
Optional: false
QoS Class: BestEffort
Node-Selectors: fcnode1=wkr1
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 135m default-scheduler Successfully assigned default/wwid-fc-pod to worker-1
Normal AddedInterface 135m multus Add eth0 [10.128.2.80/23]
Normal Pulling 135m kubelet Pulling image "ubuntu:latest"
Normal Pulled 135m kubelet Successfully pulled image "ubuntu:latest" in 651.612475ms
Normal Created 135m kubelet Created container fc-wwid
Normal Started 135m kubelet Started container fc-wwid
4. Log in to the pod and check the 3GB disk at /mnt/fc1. Create a file under /mnt/fc1.
[root@fct-arc47-bastion targetwwn]# oc exec -it wwid-fc-pod sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 120G 0 disk
|-sda1 8:1 0 4M 0 part
|-sda2 8:2 0 384M 0 part
`-sda4 8:4 0 119.6G 0 part
sdb 8:16 0 120G 0 disk
|-sdb1 8:17 0 4M 0 part
|-sdb2 8:18 0 384M 0 part
`-sdb4 8:20 0 119.6G 0 part
sdc 8:32 0 120G 0 disk
|-sdc1 8:33 0 4M 0 part
|-sdc2 8:34 0 384M 0 part
`-sdc4 8:36 0 119.6G 0 part
sdd 8:48 0 120G 0 disk
|-sdd1 8:49 0 4M 0 part
|-sdd2 8:50 0 384M 0 part
`-sdd4 8:52 0 119.6G 0 part
sde 8:64 0 3G 0 disk
sdf 8:80 0 6G 0 disk
sdg 8:96 0 3G 0 disk
sdh 8:112 0 6G 0 disk
sdi 8:128 0 3G 0 disk
sdj 8:144 0 6G 0 disk
sdk 8:160 0 3G 0 disk
sdl 8:176 0 6G 0 disk
# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 120G 18G 103G 15% /
tmpfs 64M 0 64M 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
tmpfs 16G 286M 16G 2% /etc/hostname
/dev/mapper/mpathb 2.9G 9.0M 2.9G 1% /mnt/fc1
/dev/mapper/coreos-luks-root-nocrypt 120G 18G 103G 15% /etc/hosts
tmpfs 16G 256K 16G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 16G 0 16G 0% /proc/scsi
tmpfs 16G 0 16G 0% /sys/firmware
# cd /mnt/fc1
# ls -l
total 16
drwx------. 2 root root 16384 Nov 10 08:06 lost+found
# pwd
/mnt/fc1
# touch aa
# ls -ltr
total 16
drwx------. 2 root root 16384 Nov 10 08:06 lost+found
-rw-r--r--. 1 root root 0 Nov 10 08:10 aa
# pwd
/mnt/fc1
Scenario 3: Using targetWWN
1. Create a pod using the targetWWN for the 6GB disk attached to worker-1. Create /mnt/fc3 on worker-1.
The LUN ID and targetWWN for the 6GB (sdf) disk at /dev/disk/by-path is:
c-0x5005076802233f80-lun-2 -> ../../sdf
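The targetWWN and LUN in the pod spec below come from that by-path name. A hypothetical parse; the sample entry is illustrative (the listing above is truncated), and the 0x prefix is dropped in the spec:

```shell
# Illustrative by-path entry (hypothetical full form; the listing above
# is truncated).
entry='fc-0x5005076802233f80-lun-2'

# Extract the WWN (without the 0x prefix, as used in targetWWNs) and LUN.
wwn=$(echo "$entry" | sed 's/.*fc-0x\([0-9a-f]*\)-lun-.*/\1/')
lun=$(echo "$entry" | sed 's/.*-lun-\([0-9]*\)$/\1/')
echo "$wwn $lun"
```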
[root@fct-arc47-bastion targetwwn]# cat targetwwn-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: targetwwn-fc-pod
spec:
  containers:
  - image: ubuntu:latest
    command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
    name: fc-targetwwn
    volumeMounts:
    - name: fc-vol3
      mountPath: /mnt/fc3
  volumes:
  - name: fc-vol3
    fc:
      targetWWNs:
      - '5005076802233f80'
      #fsType: ext4
      lun: 2
  nodeSelector:
    fcnode1: wkr1
[root@fct-arc47-bastion targetwwn]# oc describe pod targetwwn-fc-pod
Name: targetwwn-fc-pod
Namespace: default
Priority: 0
Node: worker-1/9.114.97.112
Start Time: Tue, 10 Nov 2020 05:10:42 -0500
Labels: <none>
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.128.2.116"
],
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "",
"interface": "eth0",
"ips": [
"10.128.2.116"
],
"default": true,
"dns": {}
}]
Status: Running
IP: 10.128.2.116
IPs:
IP: 10.128.2.116
Containers:
fc-targetwwn:
Container ID: cri-o://c6840aa2da13b970d503e389e55410815a3e7dc4f33b4462c797463a4f0ad99c
Image: ubuntu:latest
Image ID: docker.io/library/ubuntu@sha256:ad426b7dd24d6fa923c1f46dce9a747082d0ef306716aefdc959e4cd23ffd22b
Port: <none>
Host Port: <none>
Command:
/bin/sh
-ec
while :; do echo '.'; sleep 5 ; done
State: Running
Started: Tue, 10 Nov 2020 05:10:54 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/mnt/fc3 from fc-vol3 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6d5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
fc-vol3:
Type: FC (a Fibre Channel disk)
TargetWWNs: 5005076802233f80
LUN: 2
FSType:
ReadOnly: false
default-token-6p6d5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6p6d5
Optional: false
QoS Class: BestEffort
Node-Selectors: fcnode1=wkr1
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m47s default-scheduler Successfully assigned default/targetwwn-fc-pod to worker-1
Normal SuccessfulAttachVolume 5m47s attachdetach-controller AttachVolume.Attach succeeded for volume "fc-vol3"
Normal AddedInterface 5m37s multus Add eth0 [10.128.2.116/23]
Normal Pulling 5m37s kubelet Pulling image "ubuntu:latest"
Normal Pulled 5m36s kubelet Successfully pulled image "ubuntu:latest" in 723.705968ms
Normal Created 5m36s kubelet Created container fc-targetwwn
Normal Started 5m35s kubelet Started container fc-targetwwn
[root@fct-arc47-bastion targetwwn]# oc get pods
NAME READY STATUS RESTARTS AGE
targetwwn-fc-pod 1/1 Running 0 34m
wwid-fc-pod 1/1 Running 0 155m
2. Log in to the pod to check the mounted FC disk and create a file on that disk.
[root@fct-arc47-bastion targetwwn]# oc exec -it targetwwn-fc-pod sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# df -kh
Filesystem Size Used Avail Use% Mounted on
overlay 120G 18G 103G 15% /
tmpfs 64M 0 64M 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
tmpfs 16G 295M 16G 2% /etc/hostname
/dev/mapper/mpathc 5.9G 24M 5.9G 1% /mnt/fc3
/dev/mapper/coreos-luks-root-nocrypt 120G 18G 103G 15% /etc/hosts
tmpfs 16G 256K 16G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 16G 0 16G 0% /proc/scsi
tmpfs 16G 0 16G 0% /sys/firmware
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 120G 0 disk
|-sda1 8:1 0 4M 0 part
|-sda2 8:2 0 384M 0 part
`-sda4 8:4 0 119.6G 0 part
sdb 8:16 0 120G 0 disk
|-sdb1 8:17 0 4M 0 part
|-sdb2 8:18 0 384M 0 part
`-sdb4 8:20 0 119.6G 0 part
sdc 8:32 0 120G 0 disk
|-sdc1 8:33 0 4M 0 part
|-sdc2 8:34 0 384M 0 part
`-sdc4 8:36 0 119.6G 0 part
sdd 8:48 0 120G 0 disk
|-sdd1 8:49 0 4M 0 part
|-sdd2 8:50 0 384M 0 part
`-sdd4 8:52 0 119.6G 0 part
sde 8:64 0 3G 0 disk
sdf 8:80 0 6G 0 disk
sdg 8:96 0 3G 0 disk
sdh 8:112 0 6G 0 disk
sdi 8:128 0 3G 0 disk
sdj 8:144 0 6G 0 disk
sdk 8:160 0 3G 0 disk
sdl 8:176 0 6G 0 disk
# cd /mnt/fc3
# touch aa
# pwd
/mnt/fc3
# ls -ltr
total 16
drwx------. 2 root root 16384 Nov 10 10:10 lost+found
-rw-r--r--. 1 root root 0 Nov 10 10:12 aa
# echo "test" >aa
# cat aa
test
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633