Tagged component as io-cache, since I don't know which GlusterFS component this issue relates to.
Description of problem:
We are running etcd in Kubernetes with GlusterFS as the backend storage.
Version-Release number of selected component (if applicable):
Etcd - 3.4.3
Steps to Reproduce:
1. Mount the etcd pod to a GlusterFS volume in Kubernetes.
2. Continuously write keys to the etcd database using the following command:
   while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | ETCDCTL_API=3 etcdctl put key || break; done
3. Check the etcd logs with: kubectl logs -f <etcd-pod>
Actual results:
You will see warnings like the following:
2020-01-03 11:50:06.591364 W | etcdserver: request "header:<ID:2814309323035058006 > put:<key:"key" value_size:1048576 >" with result "size:5" took too long (1.052778581s) to execute
2020-01-03 11:50:07.529154 W | etcdserver: request "header:<ID:2814309323035058007 > put:<key:"key" value_size:1048576 >" with result "size:5" took too long (775.427962ms) to execute
Performance degrades the longer the test runs.
Expected results:
The warnings above should not appear; writes to the GlusterFS mount should be fast and smooth.
Additional info:
We are also running Kafka, Logstash, and InfluxDB in Kubernetes with GlusterFS as the backend storage.
We have tested the following scenarios:
Case 1: etcd mounted to hostPath and Kafka to GlusterFS. We do not see the warning above, and performance was good.
Case 2: etcd with GlusterFS and Kafka with hostPath. We see the issue, and performance is very poor.
Note: we have not tuned any GlusterFS parameters explicitly; we are running GlusterFS with its default configuration.
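etcd fsyncs its write-ahead log on every put, so the "took too long" warnings usually point at slow write+fsync latency on the underlying mount. Here is a minimal sketch (not from the original report) that measures that latency; the mount paths are assumptions, so point it at your GlusterFS mount and at a hostPath directory and compare the numbers.

```python
import os
import tempfile
import time


def fsync_latency(path, size=1024 * 1024, iterations=10):
    """Write `size` bytes and fsync, `iterations` times, in directory `path`.

    Returns (worst, average) latency in seconds, mimicking the 1 MiB
    values the dd | etcdctl loop in the reproduction steps writes.
    """
    payload = os.urandom(size)
    latencies = []
    fname = os.path.join(path, "fsync-bench.tmp")
    with open(fname, "wb") as f:
        for _ in range(iterations):
            start = time.monotonic()
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
            latencies.append(time.monotonic() - start)
    os.unlink(fname)
    return max(latencies), sum(latencies) / len(latencies)


# "/mnt/gluster" is a placeholder for your GlusterFS mount point; using the
# system temp dir here only so the sketch runs anywhere.
worst, avg = fsync_latency(tempfile.gettempdir())
print(f"worst={worst:.4f}s avg={avg:.4f}s")
```

If the worst-case latency on the GlusterFS mount approaches the ~1 s values in the etcd warnings while the hostPath numbers stay low, the slowdown is in the storage path rather than in etcd itself.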
This bug has been moved to https://github.com/gluster/glusterfs/issues/870 and will be tracked there from now on. Visit the GitHub issue URL for further details.