Description of problem:
Deployment with 3 controllers, 1 compute node, 3 ceph nodes. The ceph nodes have 2 OSD disks:

source ~/stackrc
export THT=/usr/share/openstack-tripleo-heat-templates
openstack overcloud deploy --templates \
  -e $THT/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e $THT/environments/storage-environment.yaml \
  -e ~/templates/disk-layout.yaml \
  --control-scale 3 \
  --control-flavor controller \
  --compute-scale 1 \
  --compute-flavor compute \
  --ceph-storage-scale 3 \
  --ceph-storage-flavor ceph \
  --ntp-server clock.redhat.com

cat templates/disk-layout.yaml
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vdb': {}
      '/dev/vdc': {}

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-5.0.0-0.20160817161003.bacc2c6.1.el7ost.noarch
rhosp-director-images-10.0-20160819.1.el7ost.noarch
puppet-ceph-2.0.0-0.20160813061329.aa78806.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy the overcloud.

Actual results:
Deployment fails:

Error: semanage fcontext -a -t ceph_var_lib_t '/dev/vdb(/.*)?' && restorecon -R /dev/vdb returned 255 instead of one of [0]
Error: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/vdb]/Exec[fcontext_/dev/vdb]/returns: change from notrun to 0 failed: semanage fcontext -a -t ceph_var_lib_t '/dev/vdb(/.*)?' && restorecon -R /dev/vdb returned 255 instead of one of [0]
Warning: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/vdb]/Exec[ceph-osd-activate-/dev/vdb]: Skipping because of failed dependencies
Error: semanage fcontext -a -t ceph_var_lib_t '/dev/vdc(/.*)?' && restorecon -R /dev/vdc returned 255 instead of one of [0]
Error: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/vdc]/Exec[fcontext_/dev/vdc]/returns: change from notrun to 0 failed: semanage fcontext -a -t ceph_var_lib_t '/dev/vdc(/.*)?' && restorecon -R /dev/vdc returned 255 instead of one of [0]
Warning: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/vdc]/Exec[ceph-osd-activate-/dev/vdc]: Skipping because of failed dependencies

Expected results:
Deployment succeeds.

Additional info:
Manually running the command on the ceph node returns 'Permission denied':

[root@overcloud-cephstorage-0 ~]# semanage fcontext -a -t ceph_var_lib_t '/dev/vdc(/.*)?' && restorecon -R /dev/vdc
restorecon set context /dev/vdc->system_u:object_r:ceph_var_lib_t:s0 failed:'Permission denied'
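For diagnosis, a few standard SELinux checks on the Ceph node confirm the enforcing state and the current label of the OSD devices. This is only a diagnostic sketch (device names taken from the disk layout above), not part of the deployment steps:

getenforce
ls -Z /dev/vdb /dev/vdc
ausearch -m avc -ts recent | grep restorecon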
The restorecon command ran successfully after switching SELinux to permissive.

[root@overcloud-cephstorage-0 ~]# grep denied /var/log/audit/audit.log
type=AVC msg=audit(1472071302.829:107): avc: denied { associate } for pid=18832 comm="restorecon" name="vdb" dev="devtmpfs" ino=9858 scontext=system_u:object_r:ceph_var_lib_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=filesystem
type=AVC msg=audit(1472071304.262:109): avc: denied { associate } for pid=18847 comm="restorecon" name="vdc" dev="devtmpfs" ino=9863 scontext=system_u:object_r:ceph_var_lib_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=filesystem
type=AVC msg=audit(1472116924.777:242): avc: denied { associate } for pid=23938 comm="restorecon" name="vdc" dev="devtmpfs" ino=9863 scontext=system_u:object_r:ceph_var_lib_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=filesystem
type=AVC msg=audit(1472116996.360:281): avc: denied { associate } for pid=23999 comm="restorecon" name="vdc" dev="devtmpfs" ino=9863 scontext=system_u:object_r:ceph_var_lib_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=filesystem
type=AVC msg=audit(1472117183.814:361): avc: denied { associate } for pid=24115 comm="restorecon" name="vdc" dev="devtmpfs" ino=9863 scontext=system_u:object_r:ceph_var_lib_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=filesystem
type=AVC msg=audit(1472117307.715:364): avc: denied { associate } for pid=24139 comm="restorecon" name="vdc" dev="devtmpfs" ino=9863 scontext=system_u:object_r:ceph_var_lib_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=filesystem
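As a temporary diagnostic only, the permissive-mode behaviour described above can be reproduced by relaxing enforcement, re-running the failing command, and restoring enforcing mode afterwards. This is a sketch, assuming a test Ceph node where relaxing SELinux is acceptable; the proper fix is the puppet-ceph change referenced below:

setenforce 0
semanage fcontext -a -t ceph_var_lib_t '/dev/vdc(/.*)?' && restorecon -R /dev/vdc
setenforce 1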
The upstream patch has been merged.
Verified on puppet-ceph-2.2.0-1.el7ost.noarch with the following configuration:

CephStorageExtraConfig:
  ceph::profile::params::osd_max_object_name_len: 256
  ceph::profile::params::osd_max_object_namespace_len: 64
  ceph::profile::params::osd_pool_default_pg_num: 256
  ceph::profile::params::osd_pool_default_pgp_num: 256
  ceph::profile::params::osds:
    '/dev/vda': {}
    '/dev/vdb': {}
    '/dev/vdc': {}
    '/dev/vdd': {}
    '/dev/vde': {}
    '/dev/vdf': {}
    '/dev/vdg': {}
    '/dev/vdh': {}
    '/dev/vdi': {}
    '/dev/vdj': {}
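For reference, a minimal sketch of how such a block would be wrapped in an environment file and passed with -e, the same way ~/templates/disk-layout.yaml is used in the description (the file name ceph-osd-layout.yaml is hypothetical):

cat ~/templates/ceph-osd-layout.yaml
parameter_defaults:
  CephStorageExtraConfig:
    ceph::profile::params::osds:
      '/dev/vdb': {}
      '/dev/vdc': {}

openstack overcloud deploy --templates \
  -e $THT/environments/storage-environment.yaml \
  -e ~/templates/ceph-osd-layout.yaml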
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html