Description of problem:
The default value for Ceph Storage (OSD) (node) -> "Osd mount options xfs" is incorrect and causes the ceph-disk command to run with malformed parameters. The provided value is "-o inode64,noatime,logbsize=256k", but it should contain only the options themselves: "inode64,noatime,logbsize=256k". ceph-disk appends this value to its own -o command-line argument before executing the shell command. See the Ceph documentation for an example:
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#file-system-settings

Version-Release number of selected component (if applicable):
foreman-1.6.0.49-6.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy Staypuft with the default arguments for Ceph Storage (OSD) (node).
2. Configure hostgroups for controller, compute, and OSD.
3. Deploy the cluster.
4. Deploy RH Ceph 1.2.3.
5. Create a Ceph cluster.
6. Zap the Ceph disks.
7. Attempt to create a Ceph OSD.

Actual results:
OSD creation fails when ceph-disk attempts to test-mount the OSD device. A command similar to the following is executed:

/usr/bin/mount -t xfs -o -oinode64,noatime,logbsize=256k -- /dev/sdg1 /var/lib/ceph/tmp/mnt.vqnRBV

Note the double "-o" operands; the second comes from the "Osd mount options xfs" value.

Expected results:
Successful creation of an OSD.

Additional info:
The workaround is to edit the deployment and update the parameter. A sketch of the failure mode appears below.
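For illustration only, here is a minimal Python sketch of how a ceph-disk-style test mount goes wrong when the configured value already contains the flag. This is not the actual ceph-disk source; the function name build_mount_command is hypothetical, but it mirrors the behavior described above (the configured value is placed after a hard-coded "-o"):

    # Hypothetical sketch: ceph-disk supplies "-o" itself, then appends the
    # configured "Osd mount options xfs" value as the flag's argument.
    def build_mount_command(fstype, mount_options, dev, mnt):
        return ["/usr/bin/mount", "-t", fstype, "-o", mount_options, "--", dev, mnt]

    # Broken: the default value includes "-o", producing a double "-o".
    print(" ".join(build_mount_command(
        "xfs", "-o inode64,noatime,logbsize=256k",
        "/dev/sdg1", "/var/lib/ceph/tmp/mnt.vqnRBV")))

    # Correct: the value contains only the mount options.
    print(" ".join(build_mount_command(
        "xfs", "inode64,noatime,logbsize=256k",
        "/dev/sdg1", "/var/lib/ceph/tmp/mnt.vqnRBV")))

The second call yields a well-formed mount invocation, which is why stripping the leading "-o" from the parameter (the workaround above) resolves the failure.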
*** This bug has been marked as a duplicate of bug 1212580 ***