Description of problem:
The API /api/mon/configure fails with an error if ceph bits are already installed on the node.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Install a mon node using /api/mon/install
2. Create a cluster using /api/mon/configure
3. Remove the cluster
4. Try to create a cluster using the same node again (here, ceph bits are already installed on the node)

Actual results:
Mon configure fails with the below error:

TASK: [ceph-mon | create monitor initial keyring] *****************************
failed: [dhcp47-100.lab.eng.blr.redhat.com] => {"changed": true, "cmd": ["ceph-authtool", "/var/lib/ceph/tmp/keyring.mon.dhcp47-100", "--create-keyring", "--name=mon.", "--add-key=AQA7P8dWAAAAABAAH/tbiZQn/40Z8pr959UmEA==", "--cap", "mon", "allow *"], "delta": "0:00:00.070208", "end": "2016-04-28 17:22:15.399181", "rc": 1, "start": "2016-04-28 17:22:15.328973", "warnings": []}
stderr: bufferlist::write_file(/var/lib/ceph/tmp/keyring.mon.dhcp47-100): failed to open file: (2) No such file or directory
could not write /var/lib/ceph/tmp/keyring.mon.dhcp47-100
stdout: creating /var/lib/ceph/tmp/keyring.mon.dhcp47-100
added entity mon. auth auth(auid = 18446744073709551615 key=AQA7P8dWAAAAABAAH/tbiZQn/40Z8pr959UmEA== with 0 caps)

FATAL: all hosts have already failed -- aborting

Expected results:
It should pass without any error.

Additional info:
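The "(2) No such file or directory" in stderr suggests the keyring's parent directory (/var/lib/ceph/tmp) was missing when ceph-authtool tried to write the file. A minimal sketch of that failure mode, using a throwaway directory and a plain file write instead of ceph-authtool itself (the path names below are illustrative, not the real cluster paths):

```shell
# Sketch: writing a file into a nonexistent parent directory fails with
# ENOENT (2), matching the error in the failing task above.
tmpbase=$(mktemp -d)

# The parent directory "$tmpbase/missing" does not exist, so this write fails:
if echo "key-material" > "$tmpbase/missing/keyring.mon.node1" 2>/dev/null; then
  echo "unexpected success"
else
  echo "write failed: parent dir missing"
fi

# Pre-creating the directory (mkdir -p is a no-op if it already exists)
# makes the same write succeed:
mkdir -p "$tmpbase/missing"
echo "key-material" > "$tmpbase/missing/keyring.mon.node1" && echo "write succeeded"

rm -rf "$tmpbase"
```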
Where is this workflow (leaving ceph installed and reconfiguring) coming from? I wasn't aware that such behavior would require support. It would be interesting to know more about step #4: what does "create a cluster using the same node again" entail? These are just a few questions; full details on the whole process, so that we can actually try to reproduce this workflow, would be ideal:
* Was there an existing ceph.conf in /etc/ceph?
* I logged into dhcp47-100 and saw that /var/lib/ceph/tmp exists; was it missing when the error happened?
Closing this, as we aren't aware of such constraints and we didn't receive any details that would let us reproduce it. Feel free to re-open if this is an actual workflow requirement, along with the information needed to actually replicate it.
This used to happen because ceph-installer creates some directories on a fresh install, and those steps are skipped the next time around if ceph bits are already present on the node. Anyway, as this is not a very common scenario, it is OK to close.
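A sketch of how the directory-creation step could be made safe to re-run (assumption: the fix is simply to perform the creation unconditionally rather than only on fresh installs; `ensure_ceph_dirs` and the throwaway root are illustrative, not actual installer code):

```shell
# Demonstrates that idempotent directory creation is safe to repeat,
# using a temporary root instead of the real /var/lib/ceph.
root=$(mktemp -d)

ensure_ceph_dirs() {
  # mkdir -p exits 0 whether or not the directory already exists, so
  # re-running configure on a node with ceph bits installed would not fail.
  mkdir -p "$root/var/lib/ceph/tmp"
}

ensure_ceph_dirs   # first run: creates the directory
ensure_ceph_dirs   # second run: no-op, still exits 0
[ -d "$root/var/lib/ceph/tmp" ] && echo "idempotent: yes"

rm -rf "$root"
```

Running the creation step unconditionally would cover step #4 of the reproducer, since the directory is recreated even when the packages are already installed.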