Bug 1917678 - Could not provision pv when no symlink and target found on rhel worker
Summary: Could not provision pv when no symlink and target found on rhel worker
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Hemant Kumar
QA Contact: Chao Yang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-19 06:22 UTC by Chao Yang
Modified: 2021-02-24 15:54 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:54:35 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift local-storage-operator pull 202 0 None open Bug 1917678: Fix no PV being provisioned for missing dev by-id symlink 2021-01-26 02:34:12 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:54:56 UTC

Description Chao Yang 2021-01-19 06:22:40 UTC
Description of problem:
A PV is not provisioned when no symlink to the device is found under /dev/disk/by-id/ on a RHEL worker.

Version-Release number of selected component (if applicable):
local-storage-operator.4.7.0-202101160343.p0
cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.9 (Maipo)

How reproducible:
Always

Steps to Reproduce:
1. Deploy LocalStorageOperator
2. Create a LocalVolumeSet:
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: lvs-loop
  namespace: openshift-local-storage
spec:
  deviceInclusionSpec:
    deviceTypes:
      - disk
    minSize: 0Ti
  maxDeviceCount: 1
  storageClassName: lvs-loop
  volumeMode: Filesystem
3. Check the block devices with lsblk:
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk 
|-xvda1 202:1    0   1M  0 part 
`-xvda2 202:2    0  30G  0 part /
xvdf    202:80   0   1G  0 disk 

oc logs pods/diskmaker-manager-vnjx2
I0119 01:39:24.476468       1 common.go:334] StorageClass "lvs-loop" configured with MountDir "/mnt/local-storage/lvs-loop", HostDir "/mnt/local-storage/lvs-loop", VolumeMode "Filesystem", FsType "", BlockCleanerCommand ["/scripts/quick_reset.sh"]
{"level":"info","ts":1611020364.4978468,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvda","filter.Name":"noChildren"}
{"level":"info","ts":1611020364.49792,"logger":"localvolumeset-symlink-controller","msg":"match negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvda1","matcher.Name":"inTypeList"}
{"level":"info","ts":1611020364.4979486,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvda2","filter.Name":"noFilesystemSignature"}
{"level":"info","ts":1611020364.498047,"logger":"localvolumeset-symlink-controller","msg":"matched disk","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvdf"}
{"level":"error","ts":1611020364.4980767,"logger":"localvolumeset-symlink-controller","msg":"error while discovering symlink source and target","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvdf","error":"IDPathNotFoundError: a symlink to  \"xvdf\" was not found in \"/dev/disk/by-id/\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/local-storage-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/local-storage-operator/pkg/diskmaker/controllers/lvset.(*ReconcileLocalVolumeSet).Reconcile\n\t/go/src/github.com/openshift/local-storage-operator/pkg/diskmaker/controllers/lvset/reconcile.go:152\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/openshift/local-storage-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/local-storage-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/openshift/local-storage-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/local-storage-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/local-storage-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/local-storage-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
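The linked PR (openshift/local-storage-operator pull 202) indicates the fix is to tolerate a missing /dev/disk/by-id/ symlink instead of aborting with IDPathNotFoundError. A minimal sketch of such a fallback, assuming an illustrative function name (not the operator's actual API), might look like:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// devicePathForSymlink looks for a symlink in byIDDir that resolves to
// /dev/<devName>. If the directory is missing entirely (as on the RHEL 7.9
// worker above, where /dev/disk/ has only by-partuuid and by-uuid) or no
// entry matches, it falls back to the raw device path rather than failing.
func devicePathForSymlink(byIDDir, devName string) string {
	devPath := filepath.Join("/dev", devName)
	entries, err := os.ReadDir(byIDDir)
	if err != nil {
		// by-id directory does not exist: use the device path directly
		return devPath
	}
	for _, e := range entries {
		link := filepath.Join(byIDDir, e.Name())
		target, err := filepath.EvalSymlinks(link)
		if err == nil && target == devPath {
			// stable by-id symlink found: prefer it for the PV
			return link
		}
	}
	// no matching symlink: fall back to the raw device path
	return devPath
}

func main() {
	fmt.Println(devicePathForSymlink("/dev/disk/by-id", "xvdf"))
}
```

With this kind of fallback, a disk such as xvdf on a RHEL worker without by-id symlinks can still be symlinked under the mount dir and provisioned as a PV, which matches the verified PV path /mnt/local-storage/lvs/xvdf in comment 4.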

Actual results:
PV is not provisioned.

Expected results:
PV is provisioned.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
ls -l /dev/disk/   
total 0
drwxr-xr-x. 2 root root 80 Jan 18 03:37 by-partuuid
drwxr-xr-x. 2 root root 60 Jan 18 03:37 by-uuid

Comment 1 Jan Safranek 2021-01-19 10:05:39 UTC
All the flakes are caused by connections to the API server being rejected or timing out. We track this in bug 1890131. I have not noticed a single flake of the test caused by the storage / fsgroup implementation in the past 7 days.

*** This bug has been marked as a duplicate of bug 1890131 ***

Comment 2 Jan Safranek 2021-01-19 15:00:37 UTC
Sorry, wrong bug. This is not a duplicate.

Comment 4 Chao Yang 2021-01-29 12:36:36 UTC
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  local:
    path: /mnt/local-storage/lvs/xvdf
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-60-36.us-east-2.compute.internal
  persistentVolumeReclaimPolicy: Delete
  storageClassName: lvs
  volumeMode: Block
status:
  phase: Available

sh-4.2# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk 
|-xvda1 202:1    0   1M  0 part 
`-xvda2 202:2    0  30G  0 part /
xvdf    202:80   0   1G  0 disk 
sh-4.2# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.9 (Maipo)

local-storage-operator.4.7.0-202101262230.p0   Local Storage                      4.7.0-202101262230.p0              Succeeded

Comment 7 errata-xmlrpc 2021-02-24 15:54:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

