Bug 2164633 - KubePersistentVolumeFillingUp - False Alert firing for PVCs 0% free inodes
Summary: KubePersistentVolumeFillingUp - False Alert firing for PVCs 0% free inodes
Keywords:
Status: CLOSED DUPLICATE of bug 2132270
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph-monitoring
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Juan Miguel Olmo
QA Contact: Harish NV Rao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-01-25 20:59 UTC by Vincent S. Cojot
Modified: 2023-08-09 16:37 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-04-03 09:07:41 UTC
Embargoed:



Description Vincent S. Cojot 2023-01-25 20:59:14 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

The PersistentVolume claimed by ocs4-image-registry-storage in Namespace
openshift-image-registry only has 0% free inodes.
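For context, the inode-based variant of this alert in the upstream kubernetes-mixin is driven by the kubelet volume stats metrics. A rough sketch of the kind of PromQL expression involved (the exact alert name, threshold and label selectors in this OCP/ODF release may differ):

# PromQL sketch: fire when the reported free-inode ratio for a PVC drops below ~3%
kubelet_volume_stats_inodes_free{job="kubelet"}
  / kubelet_volume_stats_inodes{job="kubelet"}
  < 0.03

If the CSI driver/kubelet reports free inodes as 0 for a CephFS volume (see the df output in comment 3), this ratio evaluates to 0% even on an empty volume.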


Version of all relevant components (if applicable):

ocp 4.12.0
odf 4.11.4 + localstorage 4.12.0


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

The volume is empty, so there should not be an alert. The same setup worked fine with ODF 4.10 and earlier.

Is there any workaround available to the best of your knowledge?

Nope

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1 - very simple


Is this issue reproducible?

Every time I deploy OCP 4.11.z or OCP 4.12.z with the ODF 4.11 stack.

Can this issue be reproduced from the UI?

Yes.

If this is a regression, please provide more details to justify this:

Seems like a regression.

Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:
No alert should fire.

Additional info:

Comment 2 Vincent S. Cojot 2023-01-25 21:06:49 UTC
The PersistentVolume claimed by ocs4-image-registry-storage in Namespace openshift-image-registry
only has 0% free inodes.

But:

~ oc project openshift-image-registry 
Already on project "openshift-image-registry" on server "https://api.ocp4d.openshift.lasthome.solace.krynn:6443".
~ oc get pvc
NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
image-registry-storage        Bound    nfs-registry-storage                       100Gi      RWX                                        167m
ocs4-image-registry-storage   Bound    pvc-94ba42bd-abf7-40f1-94a1-c7b081a857d1   2Ti        RWX            ocs-storagecluster-cephfs   125m
~ oc describe pvc ocs4-image-registry-storage
Name:          ocs4-image-registry-storage
Namespace:     openshift-image-registry
StorageClass:  ocs-storagecluster-cephfs
Status:        Bound
Volume:        pvc-94ba42bd-abf7-40f1-94a1-c7b081a857d1
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Ti
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       image-registry-66c87f5f7f-n9qkw
Events:
  Type    Reason                 Age   From                                                                                                                      Message
  ----    ------                 ----  ----                                                                                                                      -------
  Normal  ExternalProvisioning   125m  persistentvolume-controller                                                                                               waiting for a volume to be created, either by external provisioner "openshift-storage.cephfs.csi.ceph.com" or manually created by system administrator
  Normal  Provisioning           125m  openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-6bf84b7448-lbcd2_044132e5-b735-476c-85ab-4b119b65a1e3  External provisioner is provisioning volume for claim "openshift-image-registry/ocs4-image-registry-storage"
  Normal  ProvisioningSucceeded  125m  openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-6bf84b7448-lbcd2_044132e5-b735-476c-85ab-4b119b65a1e3  Successfully provisioned volume pvc-94ba42bd-abf7-40f1-94a1-c7b081a857d1
~

Comment 3 Vincent S. Cojot 2023-01-25 21:09:26 UTC
~$ oc get pods
NAME                                              READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-fb799676d-9xm5n   1/1     Running   2          3h10m
image-registry-66c87f5f7f-n9qkw                   1/1     Running   0          128m
node-ca-5s46q                                     1/1     Running   1          152m
node-ca-7vzlh                                     1/1     Running   1          152m
node-ca-bps8l                                     1/1     Running   1          152m
node-ca-dj99t                                     1/1     Running   1          178m
node-ca-drm9b                                     1/1     Running   1          178m
node-ca-fsdps                                     1/1     Running   1          152m
node-ca-g7f4c                                     1/1     Running   2          178m
node-ca-hfbkk                                     1/1     Running   1          152m
node-ca-jvdfv                                     1/1     Running   1          178m
node-ca-p8n48                                     1/1     Running   1          152m
node-ca-vzwpb                                     1/1     Running   1          178m
node-ca-wwz74                                     1/1     Running   1          178m
~$ oc rsh image-registry-66c87f5f7f-n9qkw
sh-4.4$ df -ik /registry 
Filesystem                                                                                                                                               Inodes IUsed IFree IUse% Mounted on
172.30.84.102:6789,172.30.55.247:6789,172.30.233.132:6789:/volumes/csi/csi-vol-7d709fad-9ce2-11ed-a74f-0a580ae0100c/9939d8de-ffb2-4216-9518-86ed58593ab0   2789     -     -     - /registry
sh-4.4$ df -k /registry
Filesystem                                                                                                                                                1K-blocks  Used  Available Use% Mounted on
172.30.84.102:6789,172.30.55.247:6789,172.30.233.132:6789:/volumes/csi/csi-vol-7d709fad-9ce2-11ed-a74f-0a580ae0100c/9939d8de-ffb2-4216-9518-86ed58593ab0 2147483648     0 2147483648   0% /registry

The 2Ti volume is 100% empty.
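To confirm that the alert is driven by the exported kubelet metrics rather than the real filesystem state, the underlying series can be queried directly. A minimal sketch, assuming the default openshift-monitoring stack with the thanos-querier route exposed and a token with permission to query it; the PVC name is taken from this report:

# Hypothetical check (not part of the original report): inspect the inode metrics for the CephFS PVC
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
TOKEN=$(oc whoami -t)
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
  --data-urlencode 'query=kubelet_volume_stats_inodes_free{namespace="openshift-image-registry",persistentvolumeclaim="ocs4-image-registry-storage"}'

A result of 0 here, while df shows the volume empty, would point at how CephFS inode statistics are surfaced (note the "-" under IFree above) rather than at actual exhaustion.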

Comment 4 Vincent S. Cojot 2023-01-25 21:12:57 UTC
~$ oc get csv -n openshift-local-storage
NAME                                          DISPLAY              VERSION               REPLACES                    PHASE
local-storage-operator.v4.12.0-202301042354   Local Storage        4.12.0-202301042354                               Succeeded
netobserv-operator.v0.2.2                     NetObserv Operator   0.2.2                 netobserv-operator.v0.2.1   Succeeded
~$ oc get csv -n openshift-storage
NAME                              DISPLAY                       VERSION   REPLACES                          PHASE
mcg-operator.v4.11.4              NooBaa Operator               4.11.4    mcg-operator.v4.11.3              Succeeded
netobserv-operator.v0.2.2         NetObserv Operator            0.2.2     netobserv-operator.v0.2.1         Succeeded
ocs-operator.v4.11.4              OpenShift Container Storage   4.11.4    ocs-operator.v4.11.3              Succeeded
odf-csi-addons-operator.v4.11.4   CSI Addons                    4.11.4    odf-csi-addons-operator.v4.11.3   Succeeded
odf-operator.v4.11.4              OpenShift Data Foundation     4.11.4    odf-operator.v4.11.3              Succeeded

Comment 6 Juan Miguel Olmo 2023-04-03 09:07:41 UTC
This was solved in ODF 4.12 (not backported to ODF 4.11).

See:
https://bugzilla.redhat.com/show_bug.cgi?id=2132270

If the fix should be backported to ODF 4.11, PM should reopen the original bug to request the backport.

*** This bug has been marked as a duplicate of bug 2132270 ***
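
Until the fix from bug 2132270 is available in the deployed ODF version, clusters still on 4.11 could temporarily silence the false alert. A minimal sketch, assuming amtool is present in the alertmanager container (usually the case) and that a time-bounded silence is acceptable as a stopgap; the matcher values come from this report:

# Hypothetical stopgap (not an official workaround): silence the alert for the registry PVC for 30 days
oc -n openshift-monitoring exec alertmanager-main-0 -c alertmanager -- \
  amtool silence add \
    --alertmanager.url=http://localhost:9093 \
    --comment="False positive on CephFS PVC, see bug 2132270" \
    --duration=720h \
    alertname=KubePersistentVolumeFillingUp \
    namespace=openshift-image-registry \
    persistentvolumeclaim=ocs4-image-registry-storage

The same silence can also be created from the OpenShift console under Observe > Alerting > Silences.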

