Bug 1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
Summary: FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.6
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Jan Safranek
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-10 06:40 UTC by Archana Prabhakar
Modified: 2021-02-24 15:25 UTC
CC: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:24:43 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift kubernetes pull 413 0 None closed Bug 1887026: UPSTREAM: 95451: Fix fcpath 2021-02-19 14:21:27 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:25:21 UTC

Description Archana Prabhakar 2020-10-10 06:40:07 UTC
Description of problem:

Test Scenario 1:

Create a pod and attach an FC volume to it using its WWPN.
1. Deploy a PowerVM cluster with 3 masters and 3 workers on a PowerVM setup that is connected to a SAN.
2. Create an FC volume and attach it to the worker-0 node.
3. Provision a pod and attach the above FC volume using its targetWWN.

Issue - pod creation fails with a “no fc disk found” error.

Reference for PV and PVC creation using FC:
https://docs.openshift.com/container-platform/4.5/storage/persistent_storage/persistent-storage-fibre.html

Version-Release number of selected component (if applicable):

[root@rcmpiop-arc46-bastion fc]# oc version
Client Version: 4.6.0-rc.0
Server Version: 4.6.0-rc.0
Kubernetes Version: v1.19.0+db1fc96

How reproducible:
Issue noticed on multiple OCP 4.6 builds.

Steps to Reproduce:

Test steps:

Prerequisites - OCP 4.6 cluster with an FC volume attached to worker-0

1) Label worker-0 since it has the FC volume attached. Create a directory /mnt/fc1 on worker-0.
oc label node worker-0 fcnode=wkr

2) Create a pod.yaml file with the targetWWN of the attached volume.

[root@rcmpiop-arc46-bastion fc]# cat fc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
 name: fibre-channel-example-pod
spec:
 containers:
   - image: kubernetes/pause
     name: fc1
     volumeMounts:
       - name: fc-vol1
         mountPath: /mnt/fc1
 volumes:
   - name: fc-vol1
     fc:
       targetWWNs:
         - 5005076802133f80
       lun: 2
       fsType: ext4
       readOnly: true
 nodeSelector:
     fcnode: wkr

3) When the pod is created, it gets stuck in the following state.

[root@rcmpiop-arc46-bastion fc]# oc get pods
NAME                        READY   STATUS              RESTARTS   AGE
fibre-channel-example-pod   0/1     ContainerCreating   0          24h


4) The describe output shows that the targetWWNs parameter has been parsed and picked up.

[root@rcmpiop-arc46-bastion fc]# oc describe pod fibre-channel-example-pod
Name:         fibre-channel-example-pod
Namespace:    default
Priority:     0
Node:         worker-0/9.114.97.21
Start Time:   Fri, 09 Oct 2020 01:52:57 -0400
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  fc1:
    Container ID:   
    Image:          kubernetes/pause
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/fc1 from fc-vol1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mssfq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  fc-vol1:
    Type:        FC (a Fibre Channel disk)
    TargetWWNs:  5005076802133f80
    LUN:         2
    FSType:      ext4
    ReadOnly:    true
  default-token-mssfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mssfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  fcnode=wkr
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                   From     Message
  ----     ------       ----                  ----     -------
  Warning  FailedMount  87m (x133 over 24h)   kubelet  Unable to attach or mount volumes: unmounted volumes=[fc-vol1], unattached volumes=[default-token-mssfq fc-vol1]: timed out waiting for the condition
  Warning  FailedMount  12m (x501 over 24h)   kubelet  Unable to attach or mount volumes: unmounted volumes=[fc-vol1], unattached volumes=[fc-vol1 default-token-mssfq]: timed out waiting for the condition
  Warning  FailedMount  102s (x726 over 24h)  kubelet  MountVolume.WaitForAttach failed for volume "fc-vol1" : no fc disk found

5) On worker-0, the pod is trying to attach the 7 GB disk. Check the /dev/disk/by-path output for the WWN that is used in the pod.yaml file.

[root@worker-0 core]# ls -l /dev/disk/by-path | grep "5005076802133f80"
lrwxrwxrwx. 1 root root  9 Oct  8 16:18 fc-0x5005076802133f80-lun-0 -> ../../sdd
lrwxrwxrwx. 1 root root 10 Oct  8 16:18 fc-0x5005076802133f80-lun-0-part1 -> ../../sdd1
lrwxrwxrwx. 1 root root 10 Oct  8 16:18 fc-0x5005076802133f80-lun-0-part2 -> ../../sdd2
lrwxrwxrwx. 1 root root 10 Oct  8 16:18 fc-0x5005076802133f80-lun-0-part4 -> ../../sdd4
lrwxrwxrwx. 1 root root  9 Oct  8 16:18 fc-0x5005076802133f80-lun-1 -> ../../sdh
lrwxrwxrwx. 1 root root  9 Oct  8 16:50 fc-0x5005076802133f80-lun-2 -> ../../sdl
lrwxrwxrwx. 1 root root  9 Oct  8 16:18 fc-0xc050760a56ab13c4-0x5005076802133f80-lun-0 -> ../../sdd
lrwxrwxrwx. 1 root root 10 Oct  8 16:18 fc-0xc050760a56ab13c4-0x5005076802133f80-lun-0-part1 -> ../../sdd1
lrwxrwxrwx. 1 root root 10 Oct  8 16:18 fc-0xc050760a56ab13c4-0x5005076802133f80-lun-0-part2 -> ../../sdd2
lrwxrwxrwx. 1 root root 10 Oct  8 16:18 fc-0xc050760a56ab13c4-0x5005076802133f80-lun-0-part4 -> ../../sdd4
lrwxrwxrwx. 1 root root  9 Oct  8 16:18 fc-0xc050760a56ab13c4-0x5005076802133f80-lun-1 -> ../../sdh
lrwxrwxrwx. 1 root root  9 Oct  8 16:50 fc-0xc050760a56ab13c4-0x5005076802133f80-lun-2 -> ../../sdl


6) Check the fdisk -l output on worker-0. The pod.yaml is attempting to attach the 7GB disk.

[root@worker-0 core]# fdisk -l
Disk /dev/sdb: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: 95C18BF4-D799-4899-BFE9-3A611E90AD88

Device      Start       End   Sectors   Size Type
/dev/sdb1    2048     10239      8192     4M PowerPC PReP boot
/dev/sdb2   10240    796671    786432   384M Linux filesystem
/dev/sdb4  796672 251658206 250861535 119.6G Linux filesystem


Disk /dev/sda: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: 95C18BF4-D799-4899-BFE9-3A611E90AD88

Device      Start       End   Sectors   Size Type
/dev/sda1    2048     10239      8192     4M PowerPC PReP boot
/dev/sda2   10240    796671    786432   384M Linux filesystem
/dev/sda4  796672 251658206 250861535 119.6G Linux filesystem


Disk /dev/sdc: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: 95C18BF4-D799-4899-BFE9-3A611E90AD88

Device      Start       End   Sectors   Size Type
/dev/sdc1    2048     10239      8192     4M PowerPC PReP boot
/dev/sdc2   10240    796671    786432   384M Linux filesystem
/dev/sdc4  796672 251658206 250861535 119.6G Linux filesystem


Disk /dev/sdd: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: 95C18BF4-D799-4899-BFE9-3A611E90AD88

Device      Start       End   Sectors   Size Type
/dev/sdd1    2048     10239      8192     4M PowerPC PReP boot
/dev/sdd2   10240    796671    786432   384M Linux filesystem
/dev/sdd4  796672 251658206 250861535 119.6G Linux filesystem


Disk /dev/mapper/coreos-luks-root-nocrypt: 119.6 GiB, 128424328704 bytes, 250828767 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sde: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sdf: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sdg: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sdh: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sdi: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sdj: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sdk: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/sdl: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Disk /dev/mapper/mpathc: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes


Note - The FC attach method below works fine on the same cluster when wwids is used as the parameter.

[root@rcmpiop-arc46-bastion fc]# cat arc.yaml
apiVersion: v1
kind: Pod
metadata:
 name: arc-wwid-pod
spec:
 containers:
   - image: ubuntu:latest
     command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
     name: fc
     volumeMounts:
       - name: fc-vol
         mountPath: /mnt/fc
 volumes:
   - name: fc-vol
     fc:
       wwids:
         - 36005076d0281005ef00000000000f395
       fsType: ext4
 nodeSelector:
     fcnode: wkr



Actual results:

Issue - pod creation fails with a “no fc disk found” error.

Expected results:

The pod should be created and the FC volume should be mounted into the pod using targetWWNs.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:

Comment 1 mkumatag 2020-10-12 07:02:55 UTC
This is happening due to a failed pattern match of the fcpath in the k8s code. I have created an issue in the k8s community and submitted a PR for it:

https://github.com/kubernetes/kubernetes/issues/95450
https://github.com/kubernetes/kubernetes/pull/95451
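
For illustration only (this is not the actual upstream patch, just a sketch of the failure mode): the /dev/disk/by-path listing in the description shows the same LUN under two symlink layouts, fc-0x&lt;target-wwn&gt;-lun-N and fc-0x&lt;initiator-wwn&gt;-0x&lt;target-wwn&gt;-lun-N (the dual-WWN form seen on this PowerVM node). A path pattern anchored on exactly one WWN misses one of the two forms; a pattern with an optional leading initiator WWN matches both:

```shell
# Sketch only: the two by-path layouts observed on worker-0 for lun 2.
wwn=5005076802133f80
lun=2
paths='fc-0x5005076802133f80-lun-2
fc-0xc050760a56ab13c4-0x5005076802133f80-lun-2'

# Allowing an optional leading "0x<initiator>-" segment matches both forms:
matches=$(printf '%s\n' "$paths" | grep -cE "^fc-(0x[0-9a-f]+-)?0x${wwn}-lun-${lun}\$")
echo "$matches"   # 2
```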

Comment 3 mkumatag 2020-10-17 15:51:29 UTC
(In reply to mkumatag from comment #1)
> This is happening due to failure of pattern match of the fcpath in the k8s
> code, created an issue in the k8s community and submitted a PR for the same:
> 
> https://github.com/kubernetes/kubernetes/issues/95450
> https://github.com/kubernetes/kubernetes/pull/95451

This patch has been merged, and so has the cherry-pick https://github.com/kubernetes/kubernetes/pull/95610 for the 1.19 release. Not sure how to get this into the OCP repository!

Comment 4 Jan Safranek 2020-10-19 08:05:10 UTC
Thanks for the upstream patch!

> Not sure how to get this into the OCP repository!

We can take it from here and it should get merged soon. Not sure about our QA environment; they may need some help verifying the bugfix once it's merged and part of a 4.7 nightly.

Comment 5 mkumatag 2020-10-19 09:31:37 UTC
(In reply to Jan Safranek from comment #4)
> Thanks for the upstream patch!
> 
> > Not sure how to get this into the OCP repository!
> 
> We can take it from here and it should get merged soon. Not sure about our
> QA environment, they may need some help verifying the bugfix once it's
> merged and part of 4.7 nightly.

np, we can help in verifying the issue once it gets into the build.

Comment 8 Archana Prabhakar 2020-11-09 12:00:49 UTC
Bug verified on the 4.7 nightly build below.

[root@fct-arc47-bastion ~]# oc version
Client Version: 4.7.0-0.nightly-ppc64le-2020-11-03-090148
Server Version: 4.7.0-0.nightly-ppc64le-2020-11-03-090148
Kubernetes Version: v1.19.0+74d9cb5


From the bastion, create a PV and PVC using the new 5 GB data volume, with its WWN and LUN ID taken from the output of the previous step.

[root@fct-arc47-bastion fc]# cat  fcnew-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-new-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  #volumeMode: Block
  #persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  fc:
    targetWWNs: ['5005076802233f81']
    lun: 3
    readOnly: true
    fsType: ext4

[root@fct-arc47-bastion fc]# oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS              REASON   AGE
fc-new-pv                                  5Gi        RWO            Retain           Available                                           manual                             2s


[root@fct-arc47-bastion fc]# cat fcnew-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fc-new-pvc
spec:
  accessModes:
    - ReadWriteOnce
  #volumeMode: Block
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual


[root@fct-arc47-bastion fc]# oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS              REASON   AGE
fc-new-pv                                  5Gi        RWO            Retain           Bound    default/fc-new-pvc                      manual                             2m1s
pvc-9b6c54fd-4dd4-4a1a-97d1-5b0453bc33ce   20Gi       RWX            Delete           Bound    openshift-image-registry/registry-pvc   nfs-storage-provisioner            5d19h

[root@fct-arc47-bastion fc]# oc get pvc
NAME           STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
fc-new-pvc     Bound    fc-new-pv     5Gi        RWO            manual         10s

Create a pod using the above PVC.

[root@fct-arc47-bastion fc]# cat fc-new-pvc-pod.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-fc-new-pvc-mnt
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-fc-new
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: fc-vol
          mountPath: /mnt/fc
      volumes:
      - name: fc-vol
        persistentVolumeClaim:
          claimName: fc-new-pvc
      nodeSelector:
          fcnode: wkr

Log in to the pod and perform the following checks.

# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                            8:0    0   120G  0 disk  
|-sda1                         8:1    0     4M  0 part  
|-sda2                         8:2    0   384M  0 part  
`-sda4                         8:4    0 119.6G  0 part  
sdb                            8:16   0   120G  0 disk  
|-sdb1                         8:17   0     4M  0 part  
|-sdb2                         8:18   0   384M  0 part  
`-sdb4                         8:20   0 119.6G  0 part  
sdc                            8:32   0   120G  0 disk  
|-sdc1                         8:33   0     4M  0 part  
|-sdc2                         8:34   0   384M  0 part  
`-sdc4                         8:36   0 119.6G  0 part  
sdd                            8:48   0   120G  0 disk  
|-sdd1                         8:49   0     4M  0 part  
|-sdd2                         8:50   0   384M  0 part  
`-sdd4                         8:52   0 119.6G  0 part  
  `-coreos-luks-root-nocrypt 253:0    0 119.6G  0 dm    /dev/termination-log
sde                            8:64   0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdf                            8:80   0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdg                            8:96   0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdh                            8:112  0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdi                            8:128  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdj                            8:144  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdk                            8:160  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdl                            8:176  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdm                            8:192  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /mnt/fc
sdn                            8:208  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /mnt/fc
sdo                            8:224  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /mnt/fc
sdp                            8:240  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /mnt/fc


Ensure that lsblk shows the new mount path [/mnt/fc] mapped to the 5 GB disk.
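
This check can also be scripted. A minimal sketch that greps a captured lsblk listing (sample lines taken from the output above) for the mountpoint and reads back the device size:

```shell
# Sketch: confirm /mnt/fc is backed by a 5G multipath device,
# using two sample lines from the lsblk output above.
lsblk_out='sdm                            8:192  0     5G  0 disk
`-mpathd                     253:3    0     5G  0 mpath /mnt/fc'

line=$(printf '%s\n' "$lsblk_out" | grep ' /mnt/fc$')
size=$(printf '%s\n' "$line" | awk '{print $4}')   # SIZE is the 4th column
echo "$size"   # 5G
```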

[root@fct-arc47-bastion fc]# oc exec -it nginx-fc-new-pvc-mnt-f46576646-dwgd9 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.

# df -h
Filesystem                            Size  Used Avail Use% Mounted on
overlay                               120G   16G  104G  14% /
tmpfs                                  64M     0   64M   0% /dev
tmpfs                                  16G     0   16G   0% /sys/fs/cgroup
shm                                    64M     0   64M   0% /dev/shm
tmpfs                                  16G  131M   16G   1% /etc/hostname
/dev/mapper/mpathd                    4.9G   20M  4.9G   1% /mnt/fc
/dev/mapper/coreos-luks-root-nocrypt  120G   16G  104G  14% /etc/hosts
tmpfs                                  16G  256K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                  16G     0   16G   0% /proc/scsi
tmpfs                                  16G     0   16G   0% /sys/firmware


Create a new file in the /mnt/fc path to ensure that the path is writable in the pod.

# cd /mnt/fc
# ls -l
total 16
drwx------. 2 root root 16384 Nov  9 08:07 lost+found
# touch aa
# ls -ltr
total 16
drwx------. 2 root root 16384 Nov  9 08:07 lost+found
-rw-r--r--. 1 root root     0 Nov  9 08:11 aa
# pwd
/mnt/fc


On worker-0, run lsblk to ensure that the 5 GB disk shows the pod's volume path mounted on this FC disk.

[root@worker-0 core]# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                            8:0    0   120G  0 disk  
|-sda1                         8:1    0     4M  0 part  
|-sda2                         8:2    0   384M  0 part  
`-sda4                         8:4    0 119.6G  0 part  
sdb                            8:16   0   120G  0 disk  
|-sdb1                         8:17   0     4M  0 part  
|-sdb2                         8:18   0   384M  0 part  
`-sdb4                         8:20   0 119.6G  0 part  
sdc                            8:32   0   120G  0 disk  
|-sdc1                         8:33   0     4M  0 part  
|-sdc2                         8:34   0   384M  0 part  
`-sdc4                         8:36   0 119.6G  0 part  
sdd                            8:48   0   120G  0 disk  
|-sdd1                         8:49   0     4M  0 part  
|-sdd2                         8:50   0   384M  0 part  /boot
`-sdd4                         8:52   0 119.6G  0 part  
  `-coreos-luks-root-nocrypt 253:0    0 119.6G  0 dm    /sysroot
sde                            8:64   0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdf                            8:80   0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdg                            8:96   0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdh                            8:112  0     7G  0 disk  
`-mpathb                     253:1    0     7G  0 mpath 
sdi                            8:128  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdj                            8:144  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdk                            8:160  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdl                            8:176  0     4G  0 disk  
`-mpathc                     253:2    0     4G  0 mpath 
sdm                            8:192  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv
sdn                            8:208  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv
sdo                            8:224  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv
sdp                            8:240  0     5G  0 disk  
`-mpathd                     253:3    0     5G  0 mpath /var/lib/kubelet/pods/b91334fd-2b22-4518-8913-213854f5a277/volumes/kubernetes.io~fc/fc-new-pv


[root@fct-arc47-bastion ~]# oc describe pv fc-new-pv 
Name:            fc-new-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Bound
Claim:           default/fc-new-pvc
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:         
Source:
    Type:        FC (a Fibre Channel disk)
    TargetWWNs:  5005076802233f81
    LUN:         3
    FSType:      ext4
    ReadOnly:    true
Events:          <none>

[root@fct-arc47-bastion ~]# oc describe pod nginx-fc-new-pvc-mnt-f46576646-dwgd9
Name:         nginx-fc-new-pvc-mnt-f46576646-dwgd9
Namespace:    default
Priority:     0
Node:         worker-0/9.114.97.113
Start Time:   Mon, 09 Nov 2020 03:07:19 -0500
Labels:       app=nginx
              pod-template-hash=f46576646
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.84"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.84"
                    ],
                    "default": true,
                    "dns": {}
                }]
Status:       Running
IP:           10.131.0.84
IPs:
  IP:           10.131.0.84
Controlled By:  ReplicaSet/nginx-fc-new-pvc-mnt-f46576646
Containers:
  nginx-fc-new:
    Container ID:   cri-o://fa730825ff11f6f403e40d61caffc8eac8d7957198970b28f3cfb3498eeaeda8
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:44cbc8f1b1d4f2caae09062fd2a77c98c911c056433b4f93f04efc3623dccb6b
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 09 Nov 2020 03:07:27 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/fc from fc-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6d5 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  fc-vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  fc-new-pvc
    ReadOnly:   false
  default-token-6p6d5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6p6d5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  fcnode=wkr
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
  Normal  Scheduled       3h45m  default-scheduler  Successfully assigned default/nginx-fc-new-pvc-mnt-f46576646-dwgd9 to worker-0
  Normal  AddedInterface  3h45m  multus             Add eth0 [10.131.0.84/23]
  Normal  Pulling         3h45m  kubelet            Pulling image "nginx:latest"
  Normal  Pulled          3h45m  kubelet            Successfully pulled image "nginx:latest" in 725.617821ms
  Normal  Created         3h44m  kubelet            Created container nginx-fc-new
  Normal  Started         3h44m  kubelet            Started container nginx-fc-new

Comment 9 Qin Ping 2020-11-10 02:04:45 UTC
Thank you, Archana!

I'll mark this as verified.

Comment 10 Archana Prabhakar 2020-11-10 10:55:45 UTC
Scenario 2: Using wwid

1. Create a pod using the wwid of the 3 GB disk attached to worker-1. Create /mnt/fc1 on worker-1.


2. Get the wwid for the 3 GB disk from ls -l /dev/disk/by-id.

The 3 GB disk shows up as mpathb in lsblk, so pick the matching wwid entry for mpathb from /dev/disk/by-id.

lrwxrwxrwx. 1 root root 10 Nov 10 07:45 dm-name-mpathb -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Nov 10 07:45 wwn-0x6005076d0281005ef00000000001058f -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Nov 10 07:45 scsi-36005076d0281005ef00000000001058f -> ../../dm-1
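
Note how the wwids value used in the pod spec below relates to the by-id listing: it is the scsi- entry name with its scsi- prefix stripped. A minimal sketch:

```shell
# The wwids value in the pod spec is the by-id "scsi-..." name
# without the "scsi-" prefix.
byid_entry='scsi-36005076d0281005ef00000000001058f'
wwid=${byid_entry#scsi-}
echo "$wwid"   # 36005076d0281005ef00000000001058f
```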

3. Create the pod with the wwid.

[root@fct-arc47-bastion targetwwn]# cat wwid-pod.yaml
apiVersion: v1
kind: Pod
metadata:
 name: wwid-fc-pod
spec:
 containers:
   - image: ubuntu:latest
     command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
     name: fc-wwid
     volumeMounts:
       - name: fc-vol2
         mountPath: /mnt/fc1
 volumes:
   - name: fc-vol2
     fc:
       wwids:
         - '36005076d0281005ef00000000001058f'
       fsType: ext4
 nodeSelector:
     fcnode1: wkr1

[root@fct-arc47-bastion targetwwn]# oc get pods
NAME                                   READY   STATUS              RESTARTS   AGE
wwid-fc-pod                            1/1     Running             0          135m

[root@fct-arc47-bastion targetwwn]# oc describe pod wwid-fc-pod
Name:         wwid-fc-pod
Namespace:    default
Priority:     0
Node:         worker-1/9.114.97.112
Start Time:   Tue, 10 Nov 2020 03:09:39 -0500
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.80"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.80"
                    ],
                    "default": true,
                    "dns": {}
                }]
Status:       Running
IP:           10.128.2.80
IPs:
  IP:  10.128.2.80
Containers:
  fc-wwid:
    Container ID:  cri-o://ba34b8e4b0ca13b1dd3e7996b969167f19f4571cf270a9cb860e6f39189651fe
    Image:         ubuntu:latest
    Image ID:      docker.io/library/ubuntu@sha256:ad426b7dd24d6fa923c1f46dce9a747082d0ef306716aefdc959e4cd23ffd22b
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -ec
      while :; do echo '.'; sleep 5 ; done
    State:          Running
      Started:      Tue, 10 Nov 2020 03:09:42 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/fc1 from fc-vol2 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6d5 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  fc-vol2:
    Type:        FC (a Fibre Channel disk)
    TargetWWNs:  
    LUN:         <none>
    FSType:      ext4
    ReadOnly:    false
  default-token-6p6d5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6p6d5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  fcnode1=wkr1
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       135m  default-scheduler  Successfully assigned default/wwid-fc-pod to worker-1
  Normal  AddedInterface  135m  multus             Add eth0 [10.128.2.80/23]
  Normal  Pulling         135m  kubelet            Pulling image "ubuntu:latest"
  Normal  Pulled          135m  kubelet            Successfully pulled image "ubuntu:latest" in 651.612475ms
  Normal  Created         135m  kubelet            Created container fc-wwid
  Normal  Started         135m  kubelet            Started container fc-wwid


4. Log in to the pod and check the 3 GB disk at /mnt/fc1. Create a file in /mnt/fc1.
    
[root@fct-arc47-bastion targetwwn]# oc exec -it wwid-fc-pod  sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.

# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   120G  0 disk 
|-sda1   8:1    0     4M  0 part 
|-sda2   8:2    0   384M  0 part 
`-sda4   8:4    0 119.6G  0 part 
sdb      8:16   0   120G  0 disk 
|-sdb1   8:17   0     4M  0 part 
|-sdb2   8:18   0   384M  0 part 
`-sdb4   8:20   0 119.6G  0 part 
sdc      8:32   0   120G  0 disk 
|-sdc1   8:33   0     4M  0 part 
|-sdc2   8:34   0   384M  0 part 
`-sdc4   8:36   0 119.6G  0 part 
sdd      8:48   0   120G  0 disk 
|-sdd1   8:49   0     4M  0 part 
|-sdd2   8:50   0   384M  0 part 
`-sdd4   8:52   0 119.6G  0 part 
sde      8:64   0     3G  0 disk 
sdf      8:80   0     6G  0 disk 
sdg      8:96   0     3G  0 disk 
sdh      8:112  0     6G  0 disk 
sdi      8:128  0     3G  0 disk 
sdj      8:144  0     6G  0 disk 
sdk      8:160  0     3G  0 disk 
sdl      8:176  0     6G  0 disk 

# df -h
Filesystem                            Size  Used Avail Use% Mounted on
overlay                               120G   18G  103G  15% /
tmpfs                                  64M     0   64M   0% /dev
tmpfs                                  16G     0   16G   0% /sys/fs/cgroup
shm                                    64M     0   64M   0% /dev/shm
tmpfs                                  16G  286M   16G   2% /etc/hostname
/dev/mapper/mpathb                    2.9G  9.0M  2.9G   1% /mnt/fc1
/dev/mapper/coreos-luks-root-nocrypt  120G   18G  103G  15% /etc/hosts
tmpfs                                  16G  256K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                  16G     0   16G   0% /proc/scsi
tmpfs                                  16G     0   16G   0% /sys/firmware

# cd /mnt/fc1

# ls -l
total 16
drwx------. 2 root root 16384 Nov 10 08:06 lost+found

# pwd
/mnt/fc1

# touch aa

# ls -ltr
total 16
drwx------. 2 root root 16384 Nov 10 08:06 lost+found
-rw-r--r--. 1 root root     0 Nov 10 08:10 aa

# pwd
/mnt/fc1



Scenario 3: Using targetWWN

1. Create a pod using the targetWWN ID for the 6GB disk attached to worker-1. Create /mnt/fc3 on worker-1.

The LUN ID and targetWWN for the 6GB (sdf) disk at /dev/disk/by-path:

c-0x5005076802233f80-lun-2 -> ../../sdf
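For context, the kubelet locates an FC volume by matching a /dev/disk/by-path entry built from the pod spec's targetWWNs and lun fields, which is the path handling the linked upstream "Fix fcpath" PR touches. A minimal sketch of how that suffix is composed, assuming the naming shown above; fc_bypath_suffix is a hypothetical helper for illustration, not a kubelet function, and the real entry also carries a host-adapter prefix (e.g. pci-...) that varies per node:

```shell
# Sketch only: compose the by-path suffix for an FC LUN from the pod
# spec's targetWWNs entry and lun number. Hypothetical helper, not part
# of kubelet; the host-adapter prefix of the real symlink is omitted.
fc_bypath_suffix() {
    wwn="$1"; lun="$2"
    printf 'fc-0x%s-lun-%s\n' "$wwn" "$lun"
}

# For the 6GB disk in this report (targetWWN 5005076802233f80, LUN 2):
fc_bypath_suffix 5005076802233f80 2   # -> fc-0x5005076802233f80-lun-2
```

On a live worker the full entry would be matched with something like `ls /dev/disk/by-path/*-fc-0x5005076802233f80-lun-2`.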

[root@fct-arc47-bastion targetwwn]# cat targetwwn-pod.yaml
apiVersion: v1
kind: Pod
metadata:
 name: targetwwn-fc-pod
spec:
 containers:
   - image: ubuntu:latest
     command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
     name: fc-targetwwn
     volumeMounts:
       - name: fc-vol3
         mountPath: /mnt/fc3
 volumes:
   - name: fc-vol3
     fc:
       targetWWNs:
         - '5005076802233f80'
       #fsType: ext4
       lun: 2
 nodeSelector:
     fcnode1: wkr1

[root@fct-arc47-bastion targetwwn]# oc describe pod targetwwn-fc-pod
Name:         targetwwn-fc-pod
Namespace:    default
Priority:     0
Node:         worker-1/9.114.97.112
Start Time:   Tue, 10 Nov 2020 05:10:42 -0500
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.116"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.116"
                    ],
                    "default": true,
                    "dns": {}
                }]
Status:       Running
IP:           10.128.2.116
IPs:
  IP:  10.128.2.116
Containers:
  fc-targetwwn:
    Container ID:  cri-o://c6840aa2da13b970d503e389e55410815a3e7dc4f33b4462c797463a4f0ad99c
    Image:         ubuntu:latest
    Image ID:      docker.io/library/ubuntu@sha256:ad426b7dd24d6fa923c1f46dce9a747082d0ef306716aefdc959e4cd23ffd22b
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -ec
      while :; do echo '.'; sleep 5 ; done
    State:          Running
      Started:      Tue, 10 Nov 2020 05:10:54 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/fc3 from fc-vol3 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6p6d5 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  fc-vol3:
    Type:        FC (a Fibre Channel disk)
    TargetWWNs:  5005076802233f80
    LUN:         2
    FSType:      
    ReadOnly:    false
  default-token-6p6d5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6p6d5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  fcnode1=wkr1
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age    From                     Message
  ----    ------                  ----   ----                     -------
  Normal  Scheduled               5m47s  default-scheduler        Successfully assigned default/targetwwn-fc-pod to worker-1
  Normal  SuccessfulAttachVolume  5m47s  attachdetach-controller  AttachVolume.Attach succeeded for volume "fc-vol3"
  Normal  AddedInterface          5m37s  multus                   Add eth0 [10.128.2.116/23]
  Normal  Pulling                 5m37s  kubelet                  Pulling image "ubuntu:latest"
  Normal  Pulled                  5m36s  kubelet                  Successfully pulled image "ubuntu:latest" in 723.705968ms
  Normal  Created                 5m36s  kubelet                  Created container fc-targetwwn
  Normal  Started                 5m35s  kubelet                  Started container fc-targetwwn

[root@fct-arc47-bastion targetwwn]# oc get pods
NAME                                   READY   STATUS              RESTARTS   AGE
targetwwn-fc-pod                       1/1     Running             0          34m
wwid-fc-pod                            1/1     Running             0          155m


2. Log in to the pod to check the mounted FC disk and create a file on that disk.

[root@fct-arc47-bastion targetwwn]# oc exec -it targetwwn-fc-pod sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.

# df -kh
Filesystem                            Size  Used Avail Use% Mounted on
overlay                               120G   18G  103G  15% /
tmpfs                                  64M     0   64M   0% /dev
tmpfs                                  16G     0   16G   0% /sys/fs/cgroup
shm                                    64M     0   64M   0% /dev/shm
tmpfs                                  16G  295M   16G   2% /etc/hostname
/dev/mapper/mpathc                    5.9G   24M  5.9G   1% /mnt/fc3
/dev/mapper/coreos-luks-root-nocrypt  120G   18G  103G  15% /etc/hosts
tmpfs                                  16G  256K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                  16G     0   16G   0% /proc/scsi
tmpfs                                  16G     0   16G   0% /sys/firmware

# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   120G  0 disk 
|-sda1   8:1    0     4M  0 part 
|-sda2   8:2    0   384M  0 part 
`-sda4   8:4    0 119.6G  0 part 
sdb      8:16   0   120G  0 disk 
|-sdb1   8:17   0     4M  0 part 
|-sdb2   8:18   0   384M  0 part 
`-sdb4   8:20   0 119.6G  0 part 
sdc      8:32   0   120G  0 disk 
|-sdc1   8:33   0     4M  0 part 
|-sdc2   8:34   0   384M  0 part 
`-sdc4   8:36   0 119.6G  0 part 
sdd      8:48   0   120G  0 disk 
|-sdd1   8:49   0     4M  0 part 
|-sdd2   8:50   0   384M  0 part 
`-sdd4   8:52   0 119.6G  0 part 
sde      8:64   0     3G  0 disk 
sdf      8:80   0     6G  0 disk 
sdg      8:96   0     3G  0 disk 
sdh      8:112  0     6G  0 disk 
sdi      8:128  0     3G  0 disk 
sdj      8:144  0     6G  0 disk 
sdk      8:160  0     3G  0 disk 
sdl      8:176  0     6G  0 disk 

# cd /mnt/fc3

# touch aa

# pwd
/mnt/fc3

# ls -ltr
total 16
drwx------. 2 root root 16384 Nov 10 10:10 lost+found
-rw-r--r--. 1 root root     0 Nov 10 10:12 aa

# echo "test" >aa

# cat aa
test

Comment 13 errata-xmlrpc 2021-02-24 15:24:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

