Description of problem: In ceph-ansible we set vfs_cache_pressure to 50 by default. This may cause issues if a storage node has a lot of memory and a lot of objects (http://tracker.ceph.com/issues/12405). I think the reasonable thing to do would be to leave it at the kernel default (100).
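For reference, a minimal sketch of what "leaving it at the default" looks like as a sysctl drop-in. The file path below is illustrative only, not the file ceph-ansible actually manages:

```shell
# Illustrative sysctl drop-in pinning vm.vfs_cache_pressure to the
# kernel default of 100 (path is hypothetical, for demonstration).
conf=/tmp/99-vfs-cache-pressure.conf

printf 'vm.vfs_cache_pressure = 100\n' > "$conf"
cat "$conf"

# Applying it requires root, so shown commented out:
# sysctl -p "$conf"
# The live value can be inspected with:
# cat /proc/sys/vm/vfs_cache_pressure
```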
How strong is the consensus on this?
Would a core dev please let us know what the optimum behavior is here for ceph-ansible?
(In reply to Ken Dreyer (Red Hat) from comment #3)
> Would a core dev please let us know what the optimum behavior is here for
> ceph-ansible?

I'd recommend not setting vfs_cache_pressure in ceph-ansible. The syncfs issue is still there and has caused real problems in the past, whereas there hasn't been good data showing that a lower vfs_cache_pressure is very helpful; the only cases I'm aware of have shown it makes little difference to performance.
Thanks Josh. PR @ https://github.com/ceph/ceph-ansible/pull/1347
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1496