Description Persona non grata 2018-05-28 06:01:53 UTC
Description of problem:
With a cluster configuration of 1 Mon + 1 Mgr, 4 MDSs, 4 OSDs, and 4 clients, 3 MDSs become active (with 1 as standby) after a fresh cluster setup, whereas previously the result was 1 active MDS and 3 standbys.
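To confirm how many active MDS ranks the filesystem is allowed, the max_mds value can be checked on the cluster. A minimal sketch, assuming the filesystem is named cephfs (as in the status output below) and the commands run from a node with the admin keyring:

ceph fs get cephfs | grep max_mds   # number of active MDS ranks the filesystem allows
ceph fs status cephfs               # per-rank view of active MDS daemons and standbys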
Version-Release number of selected component (if applicable):
ceph : ceph-12.2.5-12.el7cp
OS: Red Hat Enterprise Linux Server release 7.5 (Maipo)
How reproducible:
Always
Steps to Reproduce:
1. Set up a Ceph 3.1 cluster with the configuration mentioned above.
2. Check ceph health; 1 active MDS with 3 standbys is expected (example commands follow this list).
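The MDS counts referenced in step 2 can be read directly from the cluster, for example:

ceph -s         # full cluster status, including the mds line shown under "Actual results"
ceph mds stat   # one-line summary of active ranks and standbys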
Actual results:
  cluster:
    id:     897b8555-f20a-48d5-9d8a-4cb03ea26ee3
    health: HEALTH_WARN
            too few PGs per OSD (20 < min 30)

  services:
    mon: 1 daemons, quorum ceph-sshreeka-run125-node1-monmgrinstaller
    mgr: ceph-sshreeka-run125-node1-monmgrinstaller(active)
    mds: cephfs-3/3/3 up {0=ceph-sshreeka-run125-node6-mds=up:active,1=ceph-sshreeka-run125-node4-mds=up:active,2=ceph-sshreeka-run125-node3-mds=up:active}, 1 up:standby
    osd: 12 osds: 12 up, 12 in

  data:
    pools:   3 pools, 80 pgs
    objects: 57 objects, 4870 bytes
    usage:   1298 MB used, 346 GB / 347 GB avail
    pgs:     80 active+clean
Expected results:
Should have 1 active MDS and 3 standbys.
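Until the deployment default is fixed, the cluster can be returned to a single active MDS by hand. This is only a sketch for Luminous (12.2.x), assuming the filesystem is named cephfs and that ranks 1 and 2 are the extra active daemons; it is not the fix delivered by the erratum referenced below.

ceph fs set cephfs max_mds 1    # allow only one active rank
ceph mds deactivate cephfs:2    # stop the extra ranks (needed on Luminous;
ceph mds deactivate cephfs:1    # newer releases deactivate them automatically)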
Additional info:
Log of ansible: http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1527140513806/ceph_ansible_0.log
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2018:2819