Bug 1333442

Summary: [ceph-ansible] OSDs not coming up after addition to an existing Ubuntu Ceph cluster
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Tejas <tchandra>
Component: ceph-installer
Assignee: Andrew Schoen <aschoen>
Status: CLOSED DUPLICATE
QA Contact: sds-qe-bugs
Severity: medium
Priority: unspecified
Version: 2
CC: adeza, aschoen, ceph-eng-bugs, hnallurv, kdreyer, nthomas, sankarshan
Target Release: 2
Hardware: Unspecified
OS: Linux
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-05-09 19:57:25 UTC

Description Tejas 2016-05-05 14:01:40 UTC
Description of problem:
When I add an OSD node to an existing cluster, the OSD processes do not start
and the OSDs are not added to the cluster, even though the disk partitions
are created.

Version-Release number of selected component (if applicable):
ceph: 10.2.0-4redhat1xenial
ceph-ansible: 1.0.5-7

How reproducible:
Always

Steps to Reproduce:
1. Create an Ubuntu cluster with 2 MONs and 2 OSD nodes.
2. Add an OSD node to this cluster using osd-configure.yml (see the invocation sketch after these steps).
3. The playbook completes successfully, but the OSD is not activated.
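
For reference, a minimal sketch of the invocation in step 2, assuming a ceph-ansible inventory file named "hosts" with the new node in the [osds] group (the inventory path and --limit host are illustrative):

  # Run the OSD configuration playbook against the new node only:
  ansible-playbook osd-configure.yml -i hosts --limit magna046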

Actual results:
The OSD is not activated after being added to the existing cluster.

Expected results:
The OSD should be activated and join the cluster.

Additional info:

Existing cluster (before adding the new OSD node):
root@magna009:~# ceph osd tree
ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 5.39694 root default                                        
-2 2.69847     host magna058                                   
 0 0.89949         osd.0          up  1.00000          1.00000 
 3 0.89949         osd.3          up  1.00000          1.00000 
 5 0.89949         osd.5          up  1.00000          1.00000 
-3 2.69847     host magna077                                   
 1 0.89949         osd.1          up  1.00000          1.00000 
 2 0.89949         osd.2          up  1.00000          1.00000 
 4 0.89949         osd.4          up  1.00000          1.00000 


PLAY RECAP ******************************************************************** 
magna009                   : ok=4    changed=1    unreachable=0    failed=0   
magna046                   : ok=152  changed=9    unreachable=0    failed=0   
magna052                   : ok=4    changed=1    unreachable=0    failed=0  

root@magna046:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  9.0M  3.2G   1% /run
/dev/sda1       917G  3.3G  867G   1% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G     0  3.2G   0% /run/user/1000
root@magna046:~# 


root@magna046:~# service ceph status
* ceph.service - LSB: Start Ceph distributed file system daemons at boot time
   Loaded: loaded (/etc/init.d/ceph; bad; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:systemd-sysv-generator(8)


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FB16F7A4-03B5-43AF-A426-06C13783EE8B

Device        Start        End    Sectors   Size Type
/dev/sdb1  20482048 1953525134 1933043087 921.8G Ceph OSD
/dev/sdb2      2048   20480000   20477953   9.8G Ceph Journal

Partition table entries are not in disk order.


Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 16CB459B-409C-4551-94C3-6F0BF5DE4F53

Device        Start        End    Sectors   Size Type
/dev/sdc1  20482048 1953525134 1933043087 921.8G Ceph OSD
/dev/sdc2      2048   20480000   20477953   9.8G Ceph Journal

Partition table entries are not in disk order.


Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 34F6B867-4595-43E5-8F85-1B706C2D6B7F

Device        Start        End    Sectors   Size Type
/dev/sdd1  20482048 1953525134 1933043087 921.8G Ceph OSD
/dev/sdd2      2048   20480000   20477953   9.8G Ceph Journal
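
To confirm whether the partitions above were prepared but never activated, ceph-disk (shipped with these 10.2.x packages) can report each partition's state; output was not captured here:

  # Show prepared/active state of each Ceph data and journal partition:
  ceph-disk list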

Comment 2 Ken Dreyer (Red Hat) 2016-05-05 14:06:35 UTC
Please note that you don't want to use "service ceph status", or "service <anything>", since that is SysV, and Xenial uses systemd instead.

You can check the services with "systemctl". For example, the following commands will show the status of any ceph-mon or ceph-osd services on your system:

  systemctl status ceph-mon@*

  systemctl status ceph-osd@*
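
To enumerate every Ceph unit at once, systemctl also accepts a quoted glob pattern (quoting keeps the shell from expanding it first):

  systemctl list-units 'ceph*'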

Comment 4 Andrew Schoen 2016-05-06 20:33:50 UTC
I think this will be fixed by https://bugzilla.redhat.com/show_bug.cgi?id=1331881

The OSDs are not coming up because ceph-ansible is trying to use upstart rather than systemd on Xenial.
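
Until that fix lands, a possible manual workaround on the affected node, assuming the prepared partitions from the fdisk output above (the OSD id is a placeholder, to be taken from the activation output):

  # Activate the prepared data partition by hand (ceph-disk ships with Jewel):
  ceph-disk activate /dev/sdb1
  # Then manage the daemon through systemd rather than upstart, using the
  # OSD id that activation reports:
  systemctl enable ceph-osd@<id>
  systemctl status ceph-osd@<id>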

Comment 5 Andrew Schoen 2016-05-09 19:57:25 UTC
I'm closing this as a duplicate of bug 1331881.

*** This bug has been marked as a duplicate of bug 1331881 ***