Description of problem:
When using `ceph-deploy osd create`, the `noin` flag is ignored, resulting in immediate rebalancing. This is also true of `ceph-deploy osd prepare`.

Version-Release number of selected component (if applicable):
0.80.7, 0.67.11

How reproducible:
Very

Steps to Reproduce:
1. Set the `noin` flag
2. Deploy OSDs
3. Watch as the OSDs are created and go 'in'

Actual results:
    cluster 50cb9f72-84b3-44c8-aed9-fbaf95aa7b5a
     health HEALTH_WARN noin flag(s) set
    ...
    osdmap e737806: 417 osds: 417 up, 417 in
    ...
<run ceph-deploy>
$ ceph -s
    ...
    osdmap e737833: 418 osds: 418 up, 418 in

Expected results:
OSDs are not brought 'in' while the `noin` flag is set.

Additional info:
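A minimal command sequence for the reproduction above (node1:sdb is a placeholder host:disk pair; substitute your own, and run ceph-deploy from an admin node with a working config):

$ ceph osd set noin
$ ceph osd dump | grep flags         # should report "flags noin"
$ ceph-deploy osd create node1:sdb   # deploy one new OSD
$ ceph -s                            # the new OSD is reported both 'up' and 'in' despite noin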
This may eventually be backported once it's fixed (it's not fixed yet). Do I NACK it now since it isn't going in for 1.3.0, or leave it in limbo until it's eventually backported?
Sam - do you know if this bug exists in Ceph core, or in ceph-disk? Reassigning, and declining for 1.3.0 for now.
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.
Moving to Z stream.
(In reply to Travis Rhoden from comment #4)
> Sam - do you know if this bug exists in Ceph core, or in ceph-disk?

We need more information from Sam in this case.
I'm not sure where the bug is; I'll need to look into it.
Tested with hammer and latest master; I am not able to reproduce this issue:

# create a cluster with 3 OSDs with ids 0, 1, 2.
$ ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    cluster 7cd2c892-a7e1-4945-8195-ed422e730ff7
     health HEALTH_OK
     monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
            election epoch 6, quorum 0,1,2 a,b,c
     osdmap e9: 3 osds: 3 up, 3 in
      pgmap v62: 8 pgs, 1 pools, 0 bytes data, 0 objects
            540 GB used, 123 GB / 664 GB avail
                   8 active+clean
$ ceph osd set noin
$ ceph osd create 802dd6d2-8add-45ae-a782-b91a03f18a47
$ ceph osd crush add osd.3 1.0 host=rex001 root=default
# mkdir /var/lib/ceph/osd3
# ceph-osd -i 3 --mkfs --mkkey --osd-uuid 802dd6d2-8add-45ae-a782-b91a03f18a47
# ceph -i dev/osd3/keyring auth add osd.3 osd "allow *" mon "allow profile osd"
# ceph-osd -i 3 -c /home/kchai/dev/ceph/src/ceph.conf
$ ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    cluster 7cd2c892-a7e1-4945-8195-ed422e730ff7
     health HEALTH_WARN noin flag(s) set
     monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
            election epoch 6, quorum 0,1,2 a,b,c
     osdmap e14: 4 osds: 4 up, 3 in
            flags noin
      pgmap v82: 8 pgs, 1 pools, 0 bytes data, 0 objects
            541 GB used, 123 GB / 664 GB avail
                   8 active+clean

The new osd.3 stays 'up' but not 'in', so it's not likely an issue in ceph core.
Next stop: ceph-disk. Loic, can you please check whether this is an issue in ceph-disk?
@Ken ceph-disk does not know about or influence noin.
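For what it's worth, this is easy to confirm: ceph-disk is a single Python script, so it can be grepped for any reference to the flag (assuming ceph-disk is on $PATH; on RHEL it typically lives in /usr/sbin):

$ grep -n noin "$(command -v ceph-disk)"   # no matches: ceph-disk never reads or sets noin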
I read the noin-related code in the monitor again, and it appears that "noin" is handled correctly. It would help if we could have:
- the ceph-deploy log, so we can see whether ceph-deploy unsets "noin"
- the mon log with "debug ms = 1" and "debug mon = 10", so we know whether "noin" is unset or correctly handled
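A sketch of how to collect that mon log (mon.a is an example mon id; adjust for your cluster). Either set the options in ceph.conf on the mon host and restart the mon:

[mon]
    debug ms = 1
    debug mon = 10

or inject them into a running mon:

$ ceph tell mon.a injectargs '--debug-ms 1 --debug-mon 10'

then run ceph-deploy and attach /var/log/ceph/ceph-mon.a.log together with the ceph-deploy output.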
I can't reproduce this on ceph-0.94.3-3.el7cp.x86_64, so I guess it's fixed?
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.