Bug 1297956 - Documentation :- Ceph-deploy :- Add OSD :- Update osd activate example
Summary: Documentation :- Ceph-deploy :- Add OSD :- Update osd activate example
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 1.3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 1.3.2
Assignee: Bara Ancincova
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-12 20:43 UTC by Rachana Patel
Modified: 2016-03-01 08:21 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-01 08:21:57 UTC
Embargoed:


Attachments


Links:
Red Hat Bugzilla 1232445 (Last Updated: 2021-01-20 06:05:38 UTC)

Internal Links: 1232445

Description Rachana Patel 2016-01-12 20:43:38 UTC
Description of problem:
=======================
The Quick Ceph Deploy documentation has a section about adding OSDs that reads as follows:

--->
Once you prepare OSDs, use ceph-deploy to activate the OSDs.

ceph-deploy osd activate <ceph-node>:<data-drive>:<journal-partition> [<ceph-node>:<data-drive>:<journal-partition>]
For example:

ceph-deploy osd activate node2:sdb:ssdb node3:sdd:ssdb node4:sdd:ssdb

--->

I was following it and ran the command as written in the document, but it failed.

It works if I specify 'node:/dev/sdc1' instead of 'node:/dev/sdc'.


Version-Release number of selected component (if applicable):
==============================================================
0.94.5-1.el7cp.x86_64



How reproducible:
=================
always


Steps to Reproduce:
1. Follow the instructions for adding an OSD in the Quick Ceph Deploy guide:

[ceph1@xxx ceph-config]$ ceph-deploy osd prepare node1:/dev/sdc  node2:/dev/sdc node3:/dev/sdc

[ceph1@xxx ceph-config]$ ceph-deploy osd activate node1:/dev/sdc  node2:/dev/sdc node3:/dev/sdc


Actual results:
===============
[ceph1@xxx ceph-config]$ ceph-deploy osd activate node1:/dev/sdc  node2:/dev/sdc node3:/dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph1/ceph-config/cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.27.3): /usr/bin/ceph-deploy osd activate magna038:/dev/sdc magna065:/dev/sdc magna076:/dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd1e2993ef0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fd1e2984398>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('magna038', '/dev/sdc', None), ('magna065', '/dev/sdc', None), ('magna076', '/dev/sdc', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks magna038:/dev/sdc: magna065:/dev/sdc: magna076:/dev/sdc:
[magna038][DEBUG ] connection detected need for sudo
[magna038][DEBUG ] connected to host: magna038 
[magna038][DEBUG ] detect platform information from remote host
[magna038][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.1 Maipo
[ceph_deploy.osd][DEBUG ] activating host magna038 disk /dev/sdc
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[magna038][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdc
[magna038][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdc
[magna038][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[magna038][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[magna038][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdc on /var/lib/ceph/tmp/mnt.o1O0eq with options noatime,inode64
[magna038][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc /var/lib/ceph/tmp/mnt.o1O0eq
[magna038][WARNIN] mount: /dev/sdc is already mounted or /var/lib/ceph/tmp/mnt.o1O0eq busy
[magna038][WARNIN] ceph-disk: Mounting filesystem failed: Command '['/usr/bin/mount', '-t', 'xfs', '-o', 'noatime,inode64', '--', '/dev/sdc', '/var/lib/ceph/tmp/mnt.o1O0eq']' returned non-zero exit status 32
[magna038][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdc


Expected results:
==================
The command should mount the device, and the OSD should come up.


Additional info:
================
The activate command completed successfully when I ran:

[ceph1@xxx ceph-config]$ ceph-deploy osd activate node1:/dev/sdc1  node2:/dev/sdc1 node3:/dev/sdc1

Comment 2 Ken Dreyer (Red Hat) 2016-01-28 17:17:08 UTC
The current example is not entirely "wrong", because you can technically format an entire device (in this case, /dev/sdb in the current docs) with xfs and use that for the OSD data, and then use a second, dedicated device (in this case, /dev/ssdb in the current docs) for the OSD journal.

The problem is that A) this is an advanced use case, and B) it doesn't align with the "prepare" step that we tell users to run, so new users can't simply copy and paste from our docs.

As Rachana points out, often we just do tests with a single device, partitioned into two halves. In fact, if you run "ceph-deploy osd prepare node1:/dev/sdc" as in Rachana's example above, that is what you end up with: one "sdc" disk, with "sdc1" and "sdc2" partitions.
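
To illustrate that layout (not from the original report; just a generic way to confirm what "prepare" created), listing the block devices on the node should show both partitions:

[ceph1@xxx ~]$ lsblk /dev/sdc

The output should list sdc1 (the OSD data partition, the one "activate" needs) and sdc2 (the journal partition).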

Maybe we should have two examples. The first example should be the "basic" use case, with a single drive (eg "sdc"), where the "activate" command exactly matches the "prepare" example that we give. This will make it easier for users to simply copy-and-paste from the docs into the command-line.
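
A sketch of that basic pairing, based on the commands that worked for Rachana above (hostnames are illustrative; the activate step points at the data partition that the prepare step created):

ceph-deploy osd prepare node1:/dev/sdc node2:/dev/sdc node3:/dev/sdc
ceph-deploy osd activate node1:/dev/sdc1 node2:/dev/sdc1 node3:/dev/sdc1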

For the second example, we can give some context, like "this is an advanced use case involving multiple drives, wholly formatted, without partition tables", so it's clearer. Better still if we pair this with the "ceph-deploy prepare" invocation that a user would need to run to set up such a layout :)
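
One possible shape for that advanced pairing, mirroring the syntax already shown in the current docs (the drive names are placeholders, and the exact prepare invocation that produces a whole-device, no-partition-table layout would still need to be confirmed):

ceph-deploy osd prepare <ceph-node>:<data-drive>:<journal-drive>
ceph-deploy osd activate <ceph-node>:<data-drive>:<journal-drive>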

Comment 3 Ken Dreyer (Red Hat) 2016-01-28 17:30:18 UTC
bug 1232445 also has some suggestions on how to improve this section.

Comment 7 Hemanth Kumar 2016-02-03 09:03:08 UTC
Verified the changes added for ceph-deploy osd activate. Looks good to me.
Moving to Verified state.

