Yes, I believe that is correct. Per my understanding, the way it works right now is:

1. If cinder_backend_gluster is true, there will be a RHS backend for Cinder on remote servers.

2. If cinder_backend_iscsi is true, there will be an LVM Block Storage node that runs the cinder-volume service and exports a local cinder-volume VG via tgtd. Incidentally, the user must create the cinder-volume VG before adding the host to the storage group. The VG can be backed by iSCSI, a local disk, or a loopback device. cinder_iscsi_iface is the IP address on which the cinder-volume VG is shared via tgtd.

3. If both cinder_backend_gluster and cinder_backend_iscsi are false, the controller node (either nova or neutron) will run cinder-volume locally, backed by a cinder-volume VG with the same caveats as above (it must already exist, etc.). tgtd will run on the controller node's private network interface.

I patched it this way so that:

1) we have a default cinder-volume placement on the controller if no external backend is selected;
2) it uses existing parameters -- I did not want to add more;
3) it provides a nice framework for adding future backends such as NFS, which I am working on now.

From a documentation standpoint, I think it is important to explain the logic (i.e., not selecting a backend will place cinder-volume on the controller, shared via tgtd), and also to include instructions for creating a cinder-volume VG on either the LVM Block Storage node or the controller.

Incidentally, I did not test what would happen if both cinder_backend_iscsi and cinder_backend_gluster are set to true.
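For the documentation piece, something like the following could illustrate creating the prerequisite cinder-volume VG. This is a minimal sketch using a loopback-backed device; the file path and size are illustrative assumptions, not values mandated by the installer, and the commands must run as root on the target host before it is added to the storage group.

```shell
# Illustrative only: back the VG with a sparse file via a loop device.
# A real iSCSI LUN or local disk can be substituted for "$LOOP".
dd if=/dev/zero of=/var/lib/cinder-volume.img bs=1M count=0 seek=20480  # 20 GB sparse file (assumed path/size)
LOOP=$(losetup --show -f /var/lib/cinder-volume.img)  # attach file, print the loop device used
pvcreate "$LOOP"                                      # initialize it as an LVM physical volume
vgcreate cinder-volume "$LOOP"                        # VG name must be exactly "cinder-volume"
vgs cinder-volume                                     # verify the VG now exists
```

The same vgcreate step applies whether the VG lives on the dedicated LVM Block Storage node or on the controller; only the host you run it on changes.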