Description of problem:
Could not provision a PV when no symlink and target are found on a RHEL worker.

Version-Release number of selected component (if applicable):
local-storage-operator.4.7.0-202101160343.p0

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)

How reproducible:
Always

Steps to Reproduce:
1. Deploy the Local Storage Operator.
2. Create a LocalVolumeSet:

apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: lvs-loop
  namespace: openshift-local-storage
spec:
  deviceInclusionSpec:
    deviceTypes:
    - disk
    minSize: 0Ti
  maxDeviceCount: 1
  storageClassName: lvs-loop
  volumeMode: Filesystem

3. Check the block devices on the worker and the diskmaker logs:

# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
|-xvda1 202:1    0   1M  0 part
`-xvda2 202:2    0  30G  0 part /
xvdf    202:80   0   1G  0 disk

# oc logs pods/diskmaker-manager-vnjx2
I0119 01:39:24.476468 1 common.go:334] StorageClass "lvs-loop" configured with MountDir "/mnt/local-storage/lvs-loop", HostDir "/mnt/local-storage/lvs-loop", VolumeMode "Filesystem", FsType "", BlockCleanerCommand ["/scripts/quick_reset.sh"]
{"level":"info","ts":1611020364.4978468,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvda","filter.Name":"noChildren"}
{"level":"info","ts":1611020364.49792,"logger":"localvolumeset-symlink-controller","msg":"match negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvda1","matcher.Name":"inTypeList"}
{"level":"info","ts":1611020364.4979486,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvda2","filter.Name":"noFilesystemSignature"}
{"level":"info","ts":1611020364.498047,"logger":"localvolumeset-symlink-controller","msg":"matched disk","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvdf"}
{"level":"error","ts":1611020364.4980767,"logger":"localvolumeset-symlink-controller","msg":"error while discovering symlink source and target","Request.Namespace":"openshift-local-storage","Request.Name":"lvs-loop","Device.Name":"xvdf","error":"IDPathNotFoundError: a symlink to \"xvdf\" was not found in \"/dev/disk/by-id/\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/local-storage-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/local-storage-operator/pkg/diskmaker/controllers/lvset.(*ReconcileLocalVolumeSet).Reconcile\n\t/go/src/github.com/openshift/local-storage-operator/pkg/diskmaker/controllers/lvset/reconcile.go:152\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/openshift/local-storage-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/local-storage-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/openshift/local-storage-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/local-storage-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/local-storage-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/local-storage-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Actual results:
PV is not provisioned.

Expected results:
PV is provisioned.
Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
# ls -l /dev/disk/
total 0
drwxr-xr-x. 2 root root 80 Jan 18 03:37 by-partuuid
drwxr-xr-x. 2 root root 60 Jan 18 03:37 by-uuid
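For reference, the failing check can be mimicked outside the operator: the diskmaker looks for a symlink under /dev/disk/by-id/ that resolves to the matched device, and on this node that directory does not even exist, so the lookup can never succeed for xvdf. A minimal shell sketch of that lookup (the function name find_id_symlink is illustrative, not part of the operator):

```shell
#!/bin/sh
# find_id_symlink DIR DEV: print the first symlink under DIR that
# resolves to the same node as DEV, mimicking the by-id lookup that
# fails above. Returns 1 when DIR is missing or holds no matching
# symlink, which corresponds to the IDPathNotFoundError in the log.
find_id_symlink() {
    dir=$1
    dev=$2
    [ -d "$dir" ] || return 1
    for link in "$dir"/*; do
        [ -L "$link" ] || continue
        if [ "$(readlink -f "$link")" = "$(readlink -f "$dev")" ]; then
            printf '%s\n' "$link"
            return 0
        fi
    done
    return 1
}

# On an affected node this reproduces the failure:
#   find_id_symlink /dev/disk/by-id /dev/xvdf || echo "no by-id symlink for xvdf"
```

Since the RHEL 7.9 worker only has by-partuuid and by-uuid under /dev/disk/, this check fails for any whole disk, not just xvdf.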
All the flakes are caused by the connection to the API server being rejected or timing out. We track it in bug 1890131. I have not noticed a single flake of the test caused by the storage / fsgroup implementation in the past 7 days. *** This bug has been marked as a duplicate of bug 1890131 ***
Sorry, wrong bug. This is not a duplicate.
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  local:
    path: /mnt/local-storage/lvs/xvdf
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-60-36.us-east-2.compute.internal
  persistentVolumeReclaimPolicy: Delete
  storageClassName: lvs
  volumeMode: Block
status:
  phase: Available

sh-4.2# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
|-xvda1 202:1    0   1M  0 part
`-xvda2 202:2    0  30G  0 part /
xvdf    202:80   0   1G  0 disk

sh-4.2# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)

local-storage-operator.4.7.0-202101262230.p0   Local Storage   4.7.0-202101262230.p0   Succeeded
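Note that the verified PV's path ends in the plain device name (/mnt/local-storage/lvs/xvdf), consistent with the fixed operator falling back to the raw device path when no /dev/disk/by-id entry exists. A hedged sketch of that fallback behavior (symlink_source is an illustrative name, not the operator's actual function; the real logic lives in the lvset controller):

```shell
#!/bin/sh
# symlink_source DIR DEV: choose the path the symlink should point at.
# Prefer a persistent /dev/disk/by-id entry; fall back to the raw
# device path when none exists, as on this RHEL 7.9 Xen guest where
# xvd* disks get no by-id symlinks.
symlink_source() {
    dir=$1
    dev=$2
    if [ -d "$dir" ]; then
        for link in "$dir"/*; do
            [ -L "$link" ] || continue
            if [ "$(readlink -f "$link")" = "$(readlink -f "$dev")" ]; then
                printf '%s\n' "$link"
                return 0
            fi
        done
    fi
    printf '%s\n' "$dev"   # fallback: use the device path itself
}
```

With the fallback in place, a matched disk without a stable ID still yields a symlink (named after the device, hence .../lvs/xvdf) instead of aborting reconciliation with IDPathNotFoundError.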
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633