Bug 1695099
| Summary: | The number of glusterfs processes keeps increasing, using all available resources | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | Christian Ihle <christian.ihle> |
| Component: | glusterd | Assignee: | bugs <bugs> |
| Status: | CLOSED DUPLICATE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 5 | CC: | amukherj, bugs, christian.ihle |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-05-08 06:45:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Christian Ihle
2019-04-02 13:21:23 UTC
Example of how to reliably reproduce the issue from Kubernetes:

1. kubectl apply -f pvc.yaml
2. kubectl delete -f pvc.yaml

There will almost always be a few extra glusterfs processes left running after doing this.

pvc.yaml:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-replicated-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-replicated-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-replicated-2
```

I have been experimenting with setting "max_inflight_operations" to 1 in Heketi, as mentioned in https://github.com/heketi/heketi/issues/1439

Example of how to configure this: https://github.com/heketi/heketi/blob/8417f25f474b0b16e1936a66f9b63bcedfba6e4c/tests/functional/TestSmokeTest/config/heketi.json

With the value set to 1, I am no longer able to reproduce the issue. The number of glusterfs processes varies between 0 and 2 during volume changes, but always settles on a single process afterwards. This seems to be an easy workaround, but hopefully the bug will be fixed so I can revert to concurrent Heketi again.

Oh, I just noticed I wrote CentOS 7.6 only. We use Red Hat 7.6 on our main servers, but the issue is the same on both CentOS and Red Hat.

I see from the release notes of 5.6 that this issue is resolved: https://bugzilla.redhat.com/show_bug.cgi?id=1696147
Looks like it may be the same issue. I will test 5.6 once it's out.

Please let us know if you have tested 5.6 and see this problem disappearing.

I have tested 5.6 and have so far been unable to reproduce the problem. Looks like the problem is fixed.
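The max_inflight_operations workaround described above can be sketched as a Heketi config fragment. This is a hypothetical minimal heketi.json, not the reporter's actual file; the placement of the key under the "glusterfs" section follows the linked sample config, so verify it against your own deployment before applying:

```json
{
  "_comment": "Minimal sketch only; merge into your existing heketi.json",
  "glusterfs": {
    "executor": "ssh",
    "db": "/var/lib/heketi/heketi.db",
    "max_inflight_operations": 1
  }
}
```

Setting the value to 1 serializes Heketi's volume operations, which is what made the leaked glusterfs processes stop appearing in the reporter's testing, at the cost of losing concurrency.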
*** This bug has been marked as a duplicate of bug 1696147 ***