Just re-reading the comments, and giving my two cents...

A small OpenStack deployment will typically run the cinder-volume service on the controller node. The cinder-volume service itself will be backed by an NFS server, a locally mounted iSCSI volume re-shared through tgtd, or even a loopback device, as packstack implements as a proof-of-concept solution. Larger OpenStack deployments may run cinder-volume on a dedicated storage server (such as the LVM-storage host group in Foreman currently) or on the compute nodes themselves, backed by a single storage namespace such as RHS or Ceph. Alternately, cinder-volume may run on the controller node but merely redirect the API requests to the backend storage server (as is the case with RHS).

Right now Foreman ONLY allows a dedicated storage server running cinder-volume, or RHS-backed Cinder running from the controller node. There is no solution typical of smaller OpenStack deployments, where the controller runs cinder-volume backed by iSCSI, NFS, or a local loopback device. Conversely, packstack allows all of the options I have described above, except perhaps running cinder-volume on the compute nodes directly, which I have not tried. So the small-to-medium size Foreman user is left to either dedicate a storage server to Cinder volumes OR buy RHS, which will require additional servers.

The RFE I want to see is the ability to run cinder-volume on the controller node backed by a locally mounted iSCSI target, an NFS server, or even a loopback device. It does not matter to me whether the user creates the cinder-volumes VG manually or not. What is more important is the capability to run cinder-volume from the controller node. I think an elegant (i.e., simple) solution would be to modify the cinder_controller.pp module to run cinder-volume locally if cinder_backend_{iscsi,gluster} are both set to false. All we would need is a cinder-volumes VG on the controller node to share. (Ignoring NFS for the moment.)
The cinder-volumes VG could be backed by a loopback device or an iSCSI target, or it could reside anywhere else; the puppet module should not care so long as it exists. Anyway, I will try to hack this together and submit it as a patch via astapor as a proof of concept.
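For reference, a loopback-backed cinder-volumes VG of the kind described above can be created roughly as follows. This is a hedged sketch only: the backing-file path and size are illustrative, and the commands must run as root on the controller node.

```shell
#!/bin/sh
# Sketch: create a cinder-volumes VG backed by a loopback file.
# Path and size are illustrative, not taken from the patch itself.
create_cinder_vg() {
    truncate -s 20G /var/lib/cinder/cinder-volumes.img     # sparse backing file
    loopdev=$(losetup -f --show /var/lib/cinder/cinder-volumes.img)
    pvcreate "$loopdev"                                    # initialize as an LVM PV
    vgcreate cinder-volumes "$loopdev"                     # the VG name cinder-volume expects
}

# Guarded: only modify the host when explicitly requested.
if [ "${1:-}" = "--run" ]; then
    create_cinder_vg
fi
```

Because the VG is just LVM, the same steps apply unchanged if the PV is a locally mounted iSCSI LUN instead of a loop device.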
Submitted pull request: https://github.com/redhat-openstack/astapor/pull/117

Log: BZ #1056055 -- [RFE] create cinder-volumes VG backed by a loopback file https://bugzilla.redhat.com/show_bug.cgi?id=1056055

The default behavior most customers expect for small-to-medium installations is to run cinder-volume on the controller node. Packstack does this, but Foreman only allows a dedicated storage backend backed by iSCSI or RHS. This patch deploys cinder-volume on the Nova or Neutron controller node if cinder_backend_{iscsi,gluster} are both set to false. It will detect a VG named cinder-volumes (whether backed by an iSCSI target or a loopback device) and share it via tgtd. The user must create the VG, which is also the case with the cinder_backend_iscsi parameter for the LVM backend storage group. In a sense this patch replicates functionality customers expect from packstack. I tested it against the RHEL OSP 01.31.2014-1 puddle for both iSCSI- and LVM-backed Cinder from the controller node.
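The VG detection the patch relies on boils down to a check like the following. This is a sketch of the equivalent shell test only; the real logic lives in the puppet module.

```shell
#!/bin/sh
# Sketch: does a VG named cinder-volumes exist on this node?
have_cinder_vg() {
    vgs --noheadings -o vg_name 2>/dev/null | grep -qw cinder-volumes
}

if have_cinder_vg; then
    echo "cinder-volumes VG found: cinder-volume can run locally"
else
    echo "cinder-volumes VG not found: create it before enabling the backend"
fi
```

The point of the check is that the module stays agnostic about how the VG was created (loopback file, iSCSI LUN, real disk); it only requires that the VG is present.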
Merged
Verified on RHEL 6.5 with openstack-foreman-installer-1.0.4-1.el6ost.noarch:
- Foreman running on a VM
- On client host A, created a VG called cinder-volumes (cinder-testing-volume.sh)
- Added client A to the Nova controller host group
- Ran the puppet agent on the Foreman client
- Created Cinder volumes successfully via the UI and CLI
- Checked lvs output to verify the volumes were created under the cinder-volumes VG
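The CLI side of the verification above corresponds roughly to these commands. A sketch only: the volume name is illustrative, and the cinder client must already be configured with valid credentials on the controller node.

```shell
#!/bin/sh
# Sketch of the verification: create a 1 GB test volume, then confirm
# the backing LV appears in the cinder-volumes VG. Guarded so it only
# attempts the calls where the cinder client is actually installed.
verify_cinder_lvm() {
    cinder create --display-name test-vol 1   # 1 GB test volume (name illustrative)
    cinder list                               # test-vol should show as available
    lvs cinder-volumes                        # its backing LV should be listed here
}

if command -v cinder >/dev/null 2>&1; then
    verify_cinder_lvm
else
    echo "cinder client not installed; run this on the controller node"
fi
```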
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2014-0213.html