Bug 1198327
| Summary: | OSD noin behavior is inconsistent. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tupper Cole <tcole> |
| Component: | Distribution | Assignee: | Loic Dachary <ldachary> |
| Status: | CLOSED WORKSFORME | QA Contact: | Warren <wusui> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 1.3.0 | CC: | bhubbard, flucifre, hnallurv, icolle, kchai, kdreyer, ldachary, sjust, tcole, trhoden |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | 1.3.4 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-09-22 14:28:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Tupper Cole
2015-03-03 19:45:03 UTC

This may eventually be backported once it's fixed (it's not fixed yet). Do I NACK it now since it isn't going in for 1.3.0, or leave it in limbo until it's eventually backported?

Sam - do you know if this bug exists in Ceph core, or in ceph-disk? Reassigning, and declining for 1.3.0 for now.

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

Moving to Z stream.

(In reply to Travis Rhoden from comment #4)
> Sam - do you know if this bug exists in Ceph core, or in ceph-disk?

Need more information from Sam in this case.

I'm not sure where the bug is; I'll need to look into it.

Tested with hammer and the latest master. I am not able to reproduce this issue:
```
# create a cluster with 3 OSDs with id of 0,1,2.
$ ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    cluster 7cd2c892-a7e1-4945-8195-ed422e730ff7
     health HEALTH_OK
     monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
            election epoch 6, quorum 0,1,2 a,b,c
     osdmap e9: 3 osds: 3 up, 3 in
      pgmap v62: 8 pgs, 1 pools, 0 bytes data, 0 objects
            540 GB used, 123 GB / 664 GB avail
                   8 active+clean
$ ceph osd set noin
$ ceph osd create 802dd6d2-8add-45ae-a782-b91a03f18a47
$ ceph osd crush add osd.3 1.0 host=rex001 root=default
# mkdir /var/lib/ceph/osd3
# ceph-osd -i 3 --mkfs --mkkey --osd-uuid 802dd6d2-8add-45ae-a782-b91a03f18a47
# ceph -i dev/osd3/keyring auth add osd.3 osd "allow *" mon "allow profile osd"
# ceph-osd -i 3 -c /home/kchai/dev/ceph/src/ceph.conf
$ ceph -s
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    cluster 7cd2c892-a7e1-4945-8195-ed422e730ff7
     health HEALTH_WARN
            noin flag(s) set
     monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
            election epoch 6, quorum 0,1,2 a,b,c
     osdmap e14: 4 osds: 4 up, 3 in
            flags noin
      pgmap v82: 8 pgs, 1 pools, 0 bytes data, 0 objects
            541 GB used, 123 GB / 664 GB avail
                   8 active+clean
```
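The transcript shows the expected behavior: with noin set, the new osd.3 boots up but is not marked in automatically ("4 up, 3 in"). A minimal sketch of follow-up checks that would confirm this, assuming the cluster state above; these commands are illustrative and were not part of the original test:

```
# A sketch of follow-up verification, assuming osd.3 is up but not in
# because the cluster-wide noin flag is set.

# Confirm the noin flag is present in the osdmap.
ceph osd dump | grep flags

# While noin is set, booting OSDs are not marked in automatically,
# but an administrator can still mark one in by hand.
ceph osd in 3

# Unset the flag so future OSDs are marked in automatically again.
ceph osd unset noin
ceph -s
```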
So this is not likely an issue in Ceph core.

Next stop: ceph-disk. Loic, can you please check whether this is an issue in ceph-disk?

@Ken: ceph-disk does not know about or influence noin.

I read the noin-related code in the monitor again, and it appears that "noin" is handled correctly. What would probably be helpful here is:
- the ceph-deploy log, so we can see whether ceph-deploy unsets "noin", and
- the mon log with "debug ms = 1" and "debug mon = 10", so we know whether "noin" is unset or correctly handled.

I can't reproduce this on ceph-0.94.3-3.el7cp.x86_64, so I guess it's fixed?

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
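For reference, the monitor debug settings requested above ("debug ms = 1", "debug mon = 10") could be captured roughly as follows. This is a sketch against a generic cluster of that era; the mon id "a" and the log path are assumptions, not taken from this bug:

```
# Raise monitor verbosity at runtime (mon id "a" is assumed).
ceph tell mon.a injectargs '--debug-mon 10 --debug-ms 1'

# Alternatively, persist the settings in ceph.conf and restart the mon:
# [mon]
#     debug mon = 10
#     debug ms = 1

# Then reproduce the issue and look for noin handling in the mon log
# (path assumes the default cluster name "ceph").
grep -i noin /var/log/ceph/ceph-mon.a.log
```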