Bug 1468840 - RHCS parameter setuser_match_path = /var/lib/ceph/$type/$cluster-$id in ceph.conf causes add of new OSD to fail
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 2.3
Hardware: All
OS: All
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: 2.*
Assignee: Josh Durgin
QA Contact: ceph-qe-bugs
 
Reported: 2017-07-08 19:04 UTC by jquinn
Modified: 2020-12-14 09:04 UTC
CC: 8 users

Last Closed: 2019-01-30 22:23:42 UTC

Links:
- Ceph Project Bug Tracker 19642 (last updated 2017-07-08 19:04:54 UTC)
- Red Hat Knowledge Base (Solution) 3121291 (last updated 2017-07-24 12:16:56 UTC)

Description jquinn 2017-07-08 19:04:55 UTC
Description of problem: The customer is upgrading from RHCS 1.3.x to RHCS 2.x, but they wish to avoid the chown of /var/lib/ceph/osd that the upgrade requires, due to the time it takes to complete. In RHCS 1.3.x the OSD daemons run as root, while in RHCS 2.x they run as the ceph user. The directory for each OSD (/var/lib/ceph/osd/ceph-x) is owned by root, so in order for these processes to stop/start/run they set setuser_match_path = /var/lib/ceph/$type/$cluster-$id in ceph.conf under [osd]. This worked for the existing OSD processes, which are running with no issues.
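For clarity, a minimal sketch of the ceph.conf fragment described above (the placement under [osd] is the key detail):

[osd]
# existing OSD data directories are still owned by root, so let the
# daemons keep running as root whenever the data path matches
setuser_match_path = /var/lib/ceph/$type/$cluster-$id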

The problem they are running into is that when ceph-disk (or ceph-ansible) installs a new OSD, it tries to set the process up as the 'ceph' user and fails with the error below.

Error: journal specified but not allowed by osd backend

The reason for this error is that the setuser_match_path value, intended for the existing OSDs, is also applied during new OSD creation. To work around this we moved the parameter into an [osd.x] section for each of the existing OSDs. Once the value was removed from the [osd] section, ceph-disk was able to add the OSD without issue.
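In other words, the workaround replaces the single global setting with one section per pre-existing OSD, roughly like this (the OSD IDs and literal paths here are illustrative):

[osd.0]
setuser_match_path = /var/lib/ceph/osd/ceph-0

[osd.1]
setuser_match_path = /var/lib/ceph/osd/ceph-1

New OSDs get no matching section, so they fall back to running as the default ceph user.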

Note: We had to add the [osd.x] sections with the parameter to group_vars/all.yml under the ceph.conf override section so that they did not get overwritten.
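In ceph-ansible that override section is the ceph_conf_overrides variable in group_vars/all.yml; a sketch of the entries above expressed there (again, IDs and paths illustrative):

ceph_conf_overrides:
  osd.0:
    setuser_match_path: /var/lib/ceph/osd/ceph-0
  osd.1:
    setuser_match_path: /var/lib/ceph/osd/ceph-1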

We believe this option should take effect only when the /var/lib/ceph/osd/ceph-x directory exists, rather than acting as a global parameter for both old and new OSDs. Perhaps there should be a user_default = ceph style option that applies when the directory does not exist.
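A rough sketch of that suggested decision logic, in shell terms (purely illustrative; user_default is the hypothetical option proposed above):

id=0   # example OSD id
if [ -d "/var/lib/ceph/osd/ceph-${id}" ]; then
    # data directory already exists: honor setuser_match_path and keep
    # running as the directory's current owner (root, pre-upgrade)
    echo "existing OSD ${id}: match path applies"
else
    # no data directory yet, i.e. a brand-new OSD: ignore
    # setuser_match_path and use the hypothetical user_default (ceph)
    echo "new OSD ${id}: default to the ceph user"
fi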

I did discuss with the customer that running in this manner is not supported, as we don't know what unforeseen issues may arise from having two different users running the daemons. Their plan is to replace the OSDs as they fail, and each replacement OSD will use the ceph user. If you know of any issues that already exist, please let me know and I will forward that along.


Version-Release number of selected component (if applicable): RHCS 3.x


How reproducible: every time


Steps to Reproduce:
1. Upgrade from 1.3.x to 2.x and do not perform the chown as part of the upgrade.
2. Set setuser_match_path = /var/lib/ceph/$type/$cluster-$id under [osd] in ceph.conf.
3. Try to use ceph-disk or ceph-ansible to add a new OSD.

Actual results: Install fails with:
[ceph@ceph-osd1 osd]$ sudo ceph-disk --setuser ceph --setgroup ceph prepare --fs-type xfs /dev/vdb /dev/vda
ceph-disk: Error: journal specified but not allowed by osd backend
[ceph@ceph-osd1 osd]$ 

Expected results: The output below is from a run with the parameter commented out in ceph.conf; note, however, that without the parameter specified in each [osd.x] section the existing OSDs will fail to restart.

[ceph@ceph-osd1 lib]$ sudo ceph-disk --setuser ceph --setgroup ceph prepare --fs-type xfs /dev/vdb /dev/vda
prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/vdb1              isize=2048   agcount=4, agsize=655295 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621179, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
[ceph@ceph-osd1 lib]$ 


Additional info: As a note, they were using a 'ceph' user to log in and ran the scripts using sudo. I do not believe this affected anything in this situation, but wanted to be sure to note it.

Comment 5 Loic Dachary 2017-09-26 08:58:25 UTC
Moving to RADOS: the error is reported by ceph-disk but comes from ceph-osd.

