Description of problem:
https://bugzilla.redhat.com/show_bug.cgi?id=1244328 was about all overcloud nodes having the same initiatorname post-deployment. This is resolved for compute nodes, but all other nodes (controller, ceph, swift, cinder) still have the same initiatornames. This is not a problem in a vanilla setup the way we use it, but it will pose a problem if additional drives are added to these hosts via iSCSI later on.

Version-Release number of selected component (if applicable):
[stack@instack ~]$ rpm -qa | grep tripleo
openstack-tripleo-image-elements-0.9.6-10.el7ost.noarch
openstack-tripleo-common-0.0.1.dev6-6.git49b57eb.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-119.el7ost.noarch
openstack-tripleo-puppet-elements-0.0.1-5.el7ost.noarch
openstack-tripleo-0.0.7-0.1.1664e566.el7ost.noarch

How reproducible:
Always

[stack@instack ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| fe8c1d14-d63b-442e-91b8-a6c68f1214ce | overcloud-cephstorage-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| 44344478-0866-40f4-9a27-a9fa84343119 | overcloud-compute-0     | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
| 69335882-c373-4956-9823-47d95cd1ed4b | overcloud-compute-1     | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 3c691e1a-6a7e-48f1-bb28-fc2b7e953d15 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.12 |
| abd350f8-54e2-420f-9e3e-d7f4081ed51c | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| 58483250-538b-4bdb-861b-f7cc98f7d08d | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@instack ~]$ for i in `nova list | grep ctlplane | cut -d"=" -f2 | cut -d' ' -f1`; do echo $i; ssh heat-admin@$i cat /etc/iscsi/initiatorname.iscsi; done
192.0.2.7
InitiatorName=iqn.1994-05.com.redhat:9d4e9e8d8fe
192.0.2.11
InitiatorName=iqn.1994-05.com.redhat:8950acdea36
192.0.2.9
InitiatorName=iqn.1994-05.com.redhat:7c3107a5d62
192.0.2.12
InitiatorName=iqn.1994-05.com.redhat:9d4e9e8d8fe
192.0.2.10
InitiatorName=iqn.1994-05.com.redhat:9d4e9e8d8fe
192.0.2.8
InitiatorName=iqn.1994-05.com.redhat:9d4e9e8d8fe
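For anyone hitting this before the fix lands: the workaround on an affected node is to regenerate a unique initiator name and restart iscsid. On a real host you would normally use `iscsi-iname` from iscsi-initiator-utils and write to /etc/iscsi/initiatorname.iscsi; the sketch below is only illustrative and substitutes a random hex suffix and a local file so it can run without root or that package installed.

```shell
#!/bin/sh
# Illustrative sketch only: generate a unique initiator name.
# On a real node, prefer:  echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
# followed by a restart of the iscsid service.

# Stand-in for iscsi-iname: 6 random bytes rendered as 12 hex chars.
suffix=$(od -An -N6 -tx1 /dev/urandom | tr -d ' \n')

# Write to a local file here; the real path is /etc/iscsi/initiatorname.iscsi.
echo "InitiatorName=iqn.1994-05.com.redhat:${suffix}" > initiatorname.iscsi
cat initiatorname.iscsi
```

Running this on each affected node (with the real paths and `iscsi-iname`) gives every host a distinct IQN, which is what the deployment should have produced in the first place.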
I'm moving this off the 7.3 blockers because, as the report describes, it's not a problem in a standard deployment, and in the event that users want to add drives via iSCSI, we have a workaround.
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
Dan - is this still an issue? Should we consider it for OSP-13?
This was addressed in Pike by defining a new "Iscsid" TripleO service that is bound to all nodes (undercloud and overcloud) that use iSCSI. Some of the relevant patches include:
https://review.openstack.org/482170
https://review.openstack.org/462538
I don't know how to clean up this BZ without running afoul of the bot, but I'm thinking the target could be set to OSP-12 and the bug marked CLOSED CURRENTRELEASE.
Closing the loop: added a simple test case to cover this. Created a new test case: https://polarion.engineering.redhat.com/polarion/redirect/project/RHELOpenStackPlatform/workitem?id=RHELOSP-50663 Once automated I'll also "+" the automate_bug flag.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days