Bug 1297956 - Documentation :- Ceph-deploy :- Add OSD :- Update osd activate example
Status: CLOSED CURRENTRELEASE
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
1.3.2
x86_64 Linux
unspecified Severity medium
: rc
: 1.3.2
Assigned To: Bara Ancincova
ceph-qe-bugs
: Documentation, ZStream
Depends On:
Blocks:
Reported: 2016-01-12 15:43 EST by Rachana Patel
Modified: 2016-03-01 03:21 EST (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-01 03:21:57 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rachana Patel 2016-01-12 15:43:38 EST
Description of problem:
=======================
The Quick Ceph Deploy guide has a section about adding OSDs that reads as follows:

--->
Once you prepare OSDs, use ceph-deploy to activate the OSDs.

ceph-deploy osd activate <ceph-node>:<data-drive>:<journal-partition> [<ceph-node>:<data-drive>:<journal-partition>]
For example:

ceph-deploy osd activate node2:sdb:ssdb node3:sdd:ssdb node4:sdd:ssdb

--->

I was following it and ran the command as written in the document, but it failed.

It works if I give 'node:/dev/sdc1' instead of 'node:/dev/sdc'.


Version-Release number of selected component (if applicable):
==============================================================
0.94.5-1.el7cp.x86_64



How reproducible:
=================
always


Steps to Reproduce:
1. Follow the instructions in the Quick Ceph Deploy guide for adding an OSD.

[ceph1@xxx ceph-config]$ ceph-deploy osd prepare node1:/dev/sdc  node2:/dev/sdc node3:/dev/sdc

[ceph1@xxx ceph-config]$ ceph-deploy osd activate node1:/dev/sdc  node2:/dev/sdc node3:/dev/sdc


Actual results:
===============
[ceph1@xxx ceph-config]$ ceph-deploy osd activate node1:/dev/sdc  node2:/dev/sdc node3:/dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph1/ceph-config/cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.27.3): /usr/bin/ceph-deploy osd activate magna038:/dev/sdc magna065:/dev/sdc magna076:/dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd1e2993ef0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fd1e2984398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('magna038', '/dev/sdc', None), ('magna065', '/dev/sdc', None), ('magna076', '/dev/sdc', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks magna038:/dev/sdc: magna065:/dev/sdc: magna076:/dev/sdc:
[magna038][DEBUG ] connection detected need for sudo
[magna038][DEBUG ] connected to host: magna038 
[magna038][DEBUG ] detect platform information from remote host
[magna038][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.1 Maipo
[ceph_deploy.osd][DEBUG ] activating host magna038 disk /dev/sdc
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[magna038][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdc
[magna038][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdc
[magna038][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[magna038][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[magna038][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdc on /var/lib/ceph/tmp/mnt.o1O0eq with options noatime,inode64
[magna038][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc /var/lib/ceph/tmp/mnt.o1O0eq
[magna038][WARNIN] mount: /dev/sdc is already mounted or /var/lib/ceph/tmp/mnt.o1O0eq busy
[magna038][WARNIN] ceph-disk: Mounting filesystem failed: Command '['/usr/bin/mount', '-t', 'xfs', '-o', 'noatime,inode64', '--', '/dev/sdc', '/var/lib/ceph/tmp/mnt.o1O0eq']' returned non-zero exit status 32
[magna038][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdc


Expected results:
==================
The command should mount the device, and the OSD should come up.


Additional info:
================
The activate command completed successfully when I ran:

[ceph1@xxx ceph-config]$ ceph-deploy osd activate node1:/dev/sdc1  node2:/dev/sdc1 node3:/dev/sdc1
Comment 2 Ken Dreyer (Red Hat) 2016-01-28 12:17:08 EST
The current example is not entirely "wrong", because you can technically format an entire device (in this case, /dev/sdb in the current docs) with xfs and use that for the OSD data, and then use a second, dedicated device (in this case, /dev/ssdb in the current docs) for the OSD journal.

The problem is that A) this is an advanced use case, and B) it doesn't align with the "prepare" step that we tell users to run, so new users can't simply copy and paste from our docs.

As Rachana points out, often we just do tests with a single device, partitioned into two halves. In fact, if you run "ceph-deploy osd prepare node1:/dev/sdc" as in Rachana's example above, that is what you end up with: one "sdc" disk, with "sdc1" and "sdc2" partitions.

Maybe we should have two examples. The first example should be the "basic" use case, with a single drive (e.g., "sdc"), where the "activate" command exactly matches the "prepare" example that we give. This will make it easier for users to simply copy-and-paste from the docs into the command line.

For the second example, we can give some context, like "this is an advanced use case involving multiple drives, wholly formatted, without partition tables", so it's clearer. Better still if we pair this with the "ceph-deploy prepare" invocation that a user would need to run to set up such a layout :)
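The two-example structure Ken proposes might look something like this in the docs (a sketch only; the hostnames and device names are illustrative, not taken from the actual doc fix):

```shell
# Basic case: one whole device per node. "ceph-deploy osd prepare"
# partitions the device into a data partition (sdc1) and a journal
# partition (sdc2), so the matching "activate" call must reference
# the data partition, not the whole device.
ceph-deploy osd prepare node1:/dev/sdc node2:/dev/sdc node3:/dev/sdc
ceph-deploy osd activate node1:/dev/sdc1 node2:/dev/sdc1 node3:/dev/sdc1

# Advanced case: a dedicated data device and a dedicated journal
# device, each used whole with no partition table. Here the
# prepare and activate arguments are identical.
ceph-deploy osd prepare node1:/dev/sdb:/dev/sdd
ceph-deploy osd activate node1:/dev/sdb:/dev/sdd
```

Pairing each activate example with the prepare invocation that produces that layout is what makes the basic example copy-and-paste safe.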
Comment 3 Ken Dreyer (Red Hat) 2016-01-28 12:30:18 EST
bug 1232445 also has some suggestions on how to improve this section.
Comment 7 Hemanth Kumar 2016-02-03 04:03:08 EST
Verified the changes added for ceph-deploy osd activate. Looks good to me.
Moving to Verified state.
