Bug 1626073
| Summary: | [ceph-volume]: ceph-volume simple scan and activate fails on a cluster with custom name. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Parikshith <pbyregow> |
| Component: | Ceph-Volume | Assignee: | Alfredo Deza <adeza> |
| Status: | CLOSED NOTABUG | QA Contact: | Parikshith <pbyregow> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.1 | CC: | adeza, ceph-eng-bugs, ceph-qe-bugs |
| Target Milestone: | rc | | |
| Target Release: | 3.* | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-09-07 13:33:19 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
For custom cluster names (although heavily discouraged and soon to be unsupported), the `--cluster=CLUSTER_NAME` flag is required, and I don't see it being used in this ticket. So the following command:

    ceph-volume simple scan /var/lib/ceph/osd/master-2

should instead be:

    ceph-volume --cluster=master simple scan /var/lib/ceph/osd/master-2

If you confirm the above command works, please close this as NOTABUG.

The `--cluster` flag is a "global" flag, that is: it affects every single sub-command. That is why it is part of the higher level of flags, as indicated by `ceph-volume --help`.

Like I mentioned, please use:

    ceph-volume --cluster=master simple scan /var/lib/ceph/osd/master-2

We are not going to change the position of the flag, since that would cause every script that relies on the current, well-documented position to break.

If the command I suggested works, please close this ticket as NOTABUG.
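The "global flag" behavior described above can be sketched with Python's argparse. This is a minimal illustration of why such a flag must precede the sub-command name and silently falls back to its default otherwise; it is not ceph-volume's actual parser, and the single-level `scan` sub-command here is a simplification:

```python
import argparse

# Minimal sketch of a global flag defined on the top-level parser, as in
# "ceph-volume --cluster=NAME simple scan PATH". Illustrates argparse
# semantics only; not ceph-volume's real implementation.
parser = argparse.ArgumentParser(prog="ceph-volume")
parser.add_argument("--cluster", default="ceph")  # global flag with a default
sub = parser.add_subparsers(dest="command")
scan = sub.add_parser("scan")
scan.add_argument("path")

# The flag is recognized only before the sub-command name:
args = parser.parse_args(["--cluster=master", "scan", "/var/lib/ceph/osd/master-2"])
print(args.cluster)  # master

# Omitting it silently falls back to the default cluster name:
args = parser.parse_args(["scan", "/var/lib/ceph/osd/master-2"])
print(args.cluster)  # ceph
```

Because the fallback is silent, a custom-named cluster gets the default `ceph` name applied everywhere downstream, which matches the failures reported in this ticket.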
Created attachment 1481330 [details]
ceph-volume encrypted log

Description of problem:
ceph-volume simple scan and activate fails for different OSD scenarios on a cluster with a custom cluster name.

Version-Release number of selected component (if applicable):
ceph version 12.2.5-42.el7cp

Steps with actual results:

1. Install a cluster with a custom name containing non-encrypted and encrypted OSDs (ceph-disk based OSDs).

2. Scan the ceph-disk based encrypted OSDs using `ceph-volume simple scan`; it fails as below.

        $ ceph-volume simple scan /var/lib/ceph/osd/master-2
         stderr: lsblk: /var/lib/ceph/osd/master-2: not a block device
         stderr: lsblk: /var/lib/ceph/osd/master-2: not a block device
        Running command: /usr/sbin/cryptsetup status /dev/mapper/4eb40aad-c132-4846-882c-74794cc8e66f
        Running command: /usr/sbin/cryptsetup status 4eb40aad-c132-4846-882c-74794cc8e66f
        Running command: /usr/sbin/cryptsetup status /dev/sdb3
         stderr: Device sdb3 not found
        Running command: /bin/ceph --cluster ceph --name client.osd-lockbox.4eb40aad-c132-4846-882c-74794cc8e66f --keyring /var/lib/ceph/osd-lockbox/4eb40aad-c132-4846-882c-74794cc8e66f/keyring config-key get dm-crypt/osd/4eb40aad-c132-4846-882c-74794cc8e66f/luks
         stderr: 2018-09-06 12:06:15.775322 7f798d1d2700 -1 Errors while parsing config file!
         stderr: 2018-09-06 12:06:15.775326 7f798d1d2700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
         stderr: 2018-09-06 12:06:15.775327 7f798d1d2700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
         stderr: 2018-09-06 12:06:15.775328 7f798d1d2700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
         stderr: Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
        --> RuntimeError: Unable to retrieve dmcrypt secret

    The encrypted OSD scan completed successfully after creating a symlink to ceph.conf.

3. But `ceph-volume simple activate` fails to activate both encrypted and non-encrypted OSDs.

    Encrypted:

        ceph-volume simple activate --file /etc/ceph/osd/2-4eb40aad-c132-4846-882c-74794cc8e66f.json
        Running command: ceph-authtool /var/lib/ceph/osd/ceph-2/lockbox.keyring --create-keyring --name client.osd-lockbox.4eb40aad-c132-4846-882c-74794cc8e66f --add-key AQBh+YxbPYCdIhAAsIAfa0m26Ey458ESaJisgA==
         stdout: creating /var/lib/ceph/osd/ceph-2/lockbox.keyring
         stdout: added entity client.osd-lockbox.4eb40aad-c132-4846-882c-74794cc8e66f auth auth(auid = 18446744073709551615 key=AQBh+YxbPYCdIhAAsIAfa0m26Ey458ESaJisgA== with 0 caps)
         stderr: bufferlist::write_file(/var/lib/ceph/osd/ceph-2/lockbox.keyring): failed to open file: (2) No such file or directory
        could not write /var/lib/ceph/osd/ceph-2/lockbox.keyring
        --> RuntimeError: command returned non-zero exit status: 1

        [root@magna020 ubuntu]# ls //var/lib/ceph/osd/master-2/lockbox.keyring
        ls: cannot access //var/lib/ceph/osd/master-2/lockbox.keyring: No such file or directory

    Non-encrypted:

        ceph-volume simple activate --file /etc/ceph/osd/6-a8350805-2de1-4ec9-94f4-0ceabc187cd5.json
        Running command: mount -v /dev/sdb1 /var/lib/ceph/osd/ceph-6
         stderr: mount: mount point /var/lib/ceph/osd/ceph-6 does not exist
        --> RuntimeError: command returned non-zero exit status: 32

Additional info:
The scan issue is observed only on encrypted OSDs; scan works fine for non-encrypted OSDs. Attached ceph-volume logs for both encrypted and non-encrypted OSDs.
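A plausible reading of the activate failures above: the OSD mount point is derived from the cluster name, which defaults to `ceph` when `--cluster` is not passed, so activate looks under `/var/lib/ceph/osd/ceph-2` while the data lives under `/var/lib/ceph/osd/master-2`. A minimal sketch (the `/var/lib/ceph/osd/<cluster>-<id>` pattern is taken from the logs above; the helper function itself is hypothetical, not part of ceph-volume):

```python
# Hypothetical helper illustrating how the OSD mount point depends on the
# cluster name; the path pattern appears in the logs above, but this
# function is an illustration, not ceph-volume code.
def osd_mount_point(osd_id: int, cluster: str = "ceph") -> str:
    return f"/var/lib/ceph/osd/{cluster}-{osd_id}"

# Without --cluster, activate looks under the default name...
print(osd_mount_point(2))            # /var/lib/ceph/osd/ceph-2
# ...while the custom-named cluster actually keeps its data here:
print(osd_mount_point(2, "master"))  # /var/lib/ceph/osd/master-2
```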