Description of problem:
The controller nodes get configured as Swift storage nodes and they store object replicas. I would expect the objects to end up only on the object storage nodes, not on the controllers. This is risky: the controllers could run out of disk space because they are not sized for storing Swift objects.

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-0.8.7-2.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy the overcloud:

   openstack overcloud deploy --templates ~/templates/my-overcloud \
     --control-scale 1 --compute-scale 1 --swift-storage-scale 3 \
     -e ~/templates/my-overcloud/environments/network-isolation.yaml \
     -e ~/templates/network-environment.yaml

2. Upload an object:

   source overcloudrc; swift upload some_container instackenv.json

Actual results:
The object data is present on the controller:

[root@overcloud-controller-0 ~]# head -3 /srv/node/d1/objects/744/e08/ba31df04398749ee6dba4595030ece08/1452280297.52024.data
{ "ssh-user": "root", "ssh-key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAthiHE1j3i/XySBHpAlb1ipWcAFmoed9hVK+kUCiBhvVG2NW1\nl7JiZDYqpQEzkDzFuYsHgxKIPiApSynIUQhpxNjGStPdsnplyfybAQm7bRP9uMXG\nhTcJI64cAp8XC0KuB7IEMxKcPjcK/8XhSHuchxn7tdNxcJiUHg28zQY8jpEc

The objects are replicated on the storage nodes and on the controller as well:

[root@overcloud-controller-0 swift]# swift-ring-builder account.builder
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:  id region zone   ip address  port  replication ip  replication port  name weight partitions balance meta
           0      1    1  172.16.19.14 6002    172.16.19.14              6002    d1 100.00        768    0.00
           1      1    1  172.16.19.12 6002    172.16.19.12              6002    d1 100.00        768    0.00
           2      1    1  172.16.19.13 6002    172.16.19.13              6002    d1 100.00        768    0.00
           3      1    1  172.16.19.11 6002    172.16.19.11              6002    d1 100.00        768    0.00

Expected results:
The object gets replicated only on the object storage nodes.
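The partition counts in the ring output follow directly from the ring geometry: 1024 partitions times 3 replicas, spread over 4 equal-weight devices, gives 768 partition assignments per device, so the controller holds a full share of the data. A quick sanity check of that arithmetic:

```shell
# 1024 partitions, 3 replicas, 4 devices of equal weight (from the ring output)
echo $(( 1024 * 3 / 4 ))
# prints 768, matching the "partitions" column for every device
```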
I found that this behavior can be disabled via the ControllerEnableSwiftStorage parameter, which is true by default. I believe we should switch the default to false.
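Until the default changes, the parameter can be overridden per deployment with an extra environment file. A minimal sketch; the file name no-swift-on-controllers.yaml is hypothetical, while the parameter name is the one discussed in this bug:

```shell
# Hypothetical environment file overriding the tripleo-heat-templates default
mkdir -p ~/templates
cat > ~/templates/no-swift-on-controllers.yaml <<'EOF'
parameter_defaults:
  ControllerEnableSwiftStorage: false
EOF
# Then pass it to the deploy command as an additional -e argument:
#   openstack overcloud deploy --templates ~/templates/my-overcloud \
#     ... -e ~/templates/no-swift-on-controllers.yaml
```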
After switching ControllerEnableSwiftStorage to false, objects were no longer written to the controller, but the Swift ring still contained references to it:

[root@overcloud-controller-0 ~]# sudo swift-ring-builder /etc/swift/object.ring.gz
Note: using /etc/swift/object.builder instead of /etc/swift/object.ring.gz as builder file
/etc/swift/object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:  id region zone   ip address  port  replication ip  replication port  name weight partitions balance meta
           0      1    1  172.16.19.14 6000    172.16.19.14              6000    d1 100.00        768    0.00
           1      1    1  172.16.19.12 6000    172.16.19.12              6000    d1 100.00        768    0.00
           2      1    1  172.16.19.13 6000    172.16.19.13              6000    d1 100.00        768    0.00
           3      1    1  172.16.19.11 6000    172.16.19.11              6000    d1 100.00        768    0.00
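The stale controller entries presumably have to be removed from the ring by hand. A hedged sketch of the cleanup, assuming the controller is the zone 1 device at 172.16.19.11 (as in the output above) and that the regenerated ring files are then copied to every Swift node; verify the search value with swift-ring-builder before removing anything:

```shell
BUILDER=/etc/swift/object.builder
# Guarded so the sketch is a no-op off-cluster; run it on a controller node.
if command -v swift-ring-builder >/dev/null 2>&1 && [ -f "$BUILDER" ]; then
  # Remove the controller's device (zone 1, 172.16.19.11, port 6000, name d1)
  swift-ring-builder "$BUILDER" remove z1-172.16.19.11:6000/d1
  swift-ring-builder "$BUILDER" rebalance
  # The rebuilt object.ring.gz must then be redistributed to all Swift nodes,
  # and the account/container builders need the same treatment.
else
  echo "swift-ring-builder or $BUILDER not found; run this on a controller"
fi
```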
Just curious, but will I still be able to have the controller nodes store objects in Swift in the OSP 8 GA by simply making sure ControllerEnableSwiftStorage is set to true? I just want to confirm that I can still do this, as I need this ability. Thank you!
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
This is mostly about the placement of the Swift configuration. Can we make sure that only nodes containing a Swift backend get configured accordingly, and not other nodes?