Quality Engineering Management has reviewed and declined this request.
You may appeal this decision by reopening this request.
This got merged into Havana-2 upstream.
I agree the blueprint doesn't say much. The idea is to make a single cinder storage instance work in a cluster that uses cells.
Cells are basically just a scalability mechanism to combine a small number of API nodes with a large number of compute nodes.
See https://wiki.openstack.org/wiki/Blueprint-nova-compute-cells and http://docs.openstack.org/trunk/openstack-compute/admin/content/ch_cells.html for details on cells. Both contain the config file changes and nova-manage commands needed to set up cells.
So to test this we need at least 2 nodes. (More would be more realistic, but I think 2 is sufficient for functionality testing.) One is the command node and runs nova-api, cinder, nova-cells, a DB (mysql or postgres), and an AMQP broker (qpid) (but *not* nova-compute or nova-scheduler). The other is the compute node and runs nova-cells, nova-compute, nova-scheduler, nova-network, a DB, and an AMQP broker (but *not* nova-api or cinder).
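To make the two-node split concrete, the nova.conf fragments below sketch how each node might be configured. This is only an illustration based on the cells documentation linked above: the cell names are placeholders, and the exact option names and values should be checked against those docs (which also cover the nova-manage commands for registering the cells).

```ini
# Command (API) node -- nova.conf sketch
[cells]
enable = True
name = api        # placeholder name for the API cell
cell_type = api   # top-level cell: runs nova-api, no nova-compute

# Compute node -- nova.conf sketch
[cells]
enable = True
name = cell1          # placeholder name for the child cell
cell_type = compute   # child cell: runs nova-compute/nova-scheduler
```

The important property for this test is visible in the split itself: only the command node carries cinder, so any volume operation from the compute cell must travel through nova-cells rather than reaching cinder directly.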
I believe your existing storage test plan is fine, if run in this particular cells configuration. It's important that the compute cell has no direct connection to cinder, to prove that we're exercising this new code.
Yes, as I said in comment 8, I think that cinder storage sanity plan is fine, but we need to run it against the cells configuration rather than the normal configuration to exercise this blueprint.
The test run passed with a 97% pass rate.
Moving the RFE to VERIFIED.
These are the bugs opened and reported during the test plan:
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.