Bug 1388943

Summary: Adding OSD with journal on different drive steps missing from the Administration Guide
Product: Red Hat Ceph Storage
Component: Documentation
Reporter: jquinn <jquinn>
Assignee: Bara Ancincova <bancinco>
QA Contact: Vasishta <vashastr>
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Version: 2.0
CC: asriram, hnallurv, jquinn, kdreyer, khartsoe, ldachary, vashastr
Target Milestone: rc
Target Release: 2.2
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2017-03-21 23:50:25 UTC

Description jquinn 2016-10-26 14:01:57 UTC
Description of problem: The Admin Guide references having the journal on a separate drive, but does not provide the steps to configure the OSDs this way. Reference Sections 6.3.2 and 6.3.3; both the Ansible and CLI procedures should have the steps listed.


Version-Release number of selected component (if applicable): RHCS 2.0


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results: Would like to have the steps added to the documentation. I have not located the steps in any internal documents.


Additional info:

Comment 2 jquinn 2016-10-26 14:04:04 UTC
(In reply to jquinn from comment #0)
> Description of problem:The Admin Guide references having the journal on a
> separate drive, but does not provide the steps to configure the OSD's this
> way.  Reference Section 6.3.2 and 6.3.3, both ansible and CLI should have
> the steps listed.
> 

https://access.redhat.com/webassets/avalon/d/Red_Hat_Ceph_Storage-2-Administration_Guide-en-US/Red_Hat_Ceph_Storage-2-Administration_Guide-en-US.pdf


Added link to the Admin Guide

Comment 6 jquinn 2017-01-21 19:39:05 UTC
Hi Bara, 

Sorry for the delay, I must have missed that variable in the Ansible guide. That looks good; I'll have to run through it at some point to test everything out.

As for the CLI, when I went through this in my lab I followed the guide below from ceph.com to create the OSDs with a separate journal. These would most likely be the steps that we should have in the docs on the customer portal as well.

http://docs.ceph.com/docs/jewel/rados/deployment/ceph-deploy-osd/
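For reference, the basic flow from that page for an OSD with its journal on a separate drive looks roughly like this. The hostname and device names below are placeholders, not taken from this bug, and the commands assume a working ceph-deploy admin node:

```shell
# Prepare an OSD on host osdnode1: data on /dev/sdb, journal on /dev/sdc1.
# ceph-deploy accepts HOST:DISK[:JOURNAL] triplets.
ceph-deploy osd prepare osdnode1:/dev/sdb:/dev/sdc1

# Activate the prepared OSD (prepare creates the data partition, e.g. sdb1).
ceph-deploy osd activate osdnode1:/dev/sdb1:/dev/sdc1
```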

Let me know if you need further information on creating the OSDs from the CLI.

Thanks, 
Joe

Comment 10 Vasishta 2017-02-14 11:04:32 UTC
Hi Bara,

I tried the same steps mentioned in the doc but couldn't add a new OSD. I got the error message '** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-3: (13) Permission denied' in the logs when the OSD service failed to start. Apart from this, the doc needs some small changes, as listed below. Can you please have a look?


A. In step 2 of subsection 'Before you Start'

1) If the command template is followed as written, a new regular file named 'ceph' is created in /etc, and the keyring/config file is copied into that file instead of into a directory.
 
# scp root@vm1:/etc/ceph/ceph.conf /etc/ceph
root@vm1's password: 
ceph.conf                                                                                    100%  970     1.0KB/s   00:00  
# ls -l /etc/ceph
-rw-r--r--. 1 root root 970 Feb 13 10:18 /etc/ceph

2) Using the example commands results in the error copied below:

# ls -l /etc |grep ceph
# scp root@vm1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
root@vm1's password: 
/etc/ceph/: Is a directory


So, I think an extra command to create the ceph directory in /etc has to be given before the command for copying the keyring.

[root@vm5 ~]# mkdir /etc/ceph
[root@vm5 ~]# scp root@vm1:/etc/ceph/ceph.client.admin.keyring /etc/ceph
root@vm1's password:
ceph.client.admin.keyring                                                                    100%   63     0.1KB/s   00:00   
[root@vm5 ~]# ls /etc/ceph
ceph.client.admin.keyring   


B. In subsection 'Initializing the OSD Data and Journal Directory and Registering the OSD Authentication Key', the command given to create the journal on the journal disk (step iii of step 2) needs to be changed.

Given -

ceph-osd -i <osd-id> —mkjournal
For example:
# ceph-osd -i 4 -mkjournal

Appropriate version -

ceph-osd -i <osd-id> --mkjournal
For example:
# ceph-osd -i 4 --mkjournal


C. Step 5 of subsection 'Preparing the OSD Data and Journal Drives' needs a small change, which is being tracked under BZ 1379188.


Regards,
Vasishta

Comment 13 Vasishta 2017-02-14 15:24:36 UTC
Hi Bara,

'root' is the owner of the log, and SELinux was enabled.
Just before trying to start the OSD, I tried changing the default label as you mentioned, but that didn't work.
Just for information, I'm trying this on VMs.

Regards,
Vasishta

Comment 16 Vasishta 2017-02-16 16:17:21 UTC
Hi Bara,

Sorry for being late. Yes, updating the owner permission was the thing I had done wrong; I had confused the owner name with the cluster user name. Sorry.
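For anyone who hits the same 'unable to open OSD superblock ... Permission denied' error: the OSD daemon runs as the ceph user in RHCS 2, so a data directory created as root must be re-owned before the service can start. A minimal fix for the osd-id from the error above would be something like:

```shell
# The OSD daemon runs as the 'ceph' user in RHCS 2 / Jewel; directories
# created as root must be owned by ceph:ceph before starting the service.
chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
systemctl start ceph-osd@3
```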

The following is a list of steps where the '--cluster <cluster_name>' option is needed if the user runs a cluster with a custom name. Either this argument can be added to both the syntax and the example command, or a note can be given as was done for step 1 of subsection initializing-the-osd-data-directory-and-registering-the-osd-authentication-key, or that note can be generalized.


1) Step 3 of subsection installing-ceph-osd-and-creating-a-new-osd-instance.
2) Step iii of step 2 in initializing-the-osd-data-directory-and-registering-the-osd-authentication-key.
3) Step 3 of initializing-the-osd-data-directory-and-registering-the-osd-authentication-key.
4) Syntax and Example of adding-the-new-osd-node-to-the-crush-map
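As an illustration of what the added argument would look like, the journal-creation command from item 2 above would become the following for a custom cluster name ('mycluster' is a placeholder):

```shell
# --cluster makes ceph-osd read /etc/ceph/mycluster.conf instead of the
# default /etc/ceph/ceph.conf when creating the journal for osd.4.
ceph-osd -i 4 --mkjournal --cluster mycluster
```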

Please let me know if there are any concerns or issues.

Regards,
Vasishta