Description of problem:
Cinder not deleting iscsi targets for deleted volumes
Issue: The customer is seeing the iSCSI node database (/var/lib/iscsi/nodes) retain a record of every connection ever made on the server, and these records are never cleaned up. The open-iscsi initiator keeps retrying logins to the storage backend for those volumes, even though Cinder has already deleted them both on the storage array and in OpenStack.
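To see what the initiator will keep retrying, the node database can be listed directly. This is a minimal sketch, assuming the default open-iscsi layout /var/lib/iscsi/nodes/&lt;target-iqn&gt;/&lt;portal&gt;,&lt;port&gt;,&lt;tpgt&gt;; the directory argument and the function name are illustrative, not part of any OpenStack tooling.

```shell
# Sketch: list every target/portal record in the open-iscsi node
# database. Each record shown here is one the initiator will retry.
list_iscsi_nodes() {
    nodes_dir=${1:-/var/lib/iscsi/nodes}   # default open-iscsi DB path
    for target in "$nodes_dir"/*/; do
        [ -d "$target" ] || continue
        iqn=$(basename "$target")
        for portal in "$target"*/; do
            [ -d "$portal" ] || continue
            echo "target=$iqn portal=$(basename "$portal")"
        done
    done
}

list_iscsi_nodes
```

Comparing this listing against the volumes that still exist in Cinder is one way to spot the orphaned records described above.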
Environment: The customer is running OpenStack Platform 10 with SolidFire storage on the back end.
Current Status: Should Cinder be cleaning up the iSCSI node database, and if so, did it do so in OSP 10?
Some of the ERROR messages seen in the nova-compute logs resemble those in https://bugzilla.redhat.com/show_bug.cgi?id=1599233
The issue occurs on all servers across the customer's four OpenStack environments, which comprise hundreds of compute nodes.
Targets can be cleared manually, but Cinder appears to leave orphaned targets behind.
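Manual clearing of one orphaned target can be sketched as below. The IQN and portal values are hypothetical placeholders, and the DRY_RUN guard is an illustrative addition so the iscsiadm sequence can be reviewed before running it for real; the underlying commands (iscsiadm node-mode --logout and -o delete) are the standard open-iscsi operations.

```shell
# Sketch: log out of a stale session, then delete the node record so
# the initiator stops retrying it. DRY_RUN=1 only prints the commands.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

cleanup_target() {
    iqn=$1; portal=$2
    # Log out first (ignore failure if no session is currently active)...
    run iscsiadm -m node -T "$iqn" -p "$portal" --logout || true
    # ...then remove the record from /var/lib/iscsi/nodes.
    run iscsiadm -m node -T "$iqn" -p "$portal" -o delete
}

DRY_RUN=1
# Hypothetical example values; substitute the real orphaned target.
cleanup_target "iqn.2010-01.com.solidfire:example" "192.0.2.10:3260"
```

Once verified in dry-run form, the same function can be invoked with DRY_RUN unset on the affected computes.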
Version-Release number of selected component (if applicable):
OSP 10
$ grep os-brick installed-rpms
python-os-brick-1.6.2-4.el7ost.noarch Sat Mar 23 11:55:51 2019
$ grep cinder installed-rpms
openstack-cinder-9.1.4-41.el7ost.noarch Sat Mar 23 11:59:04 2019
puppet-cinder-9.5.0-6.el7ost.noarch Sat Mar 23 01:50:20 2019
python-cinderclient-1.9.0-6.el7ost.noarch Sat Mar 23 11:54:51 2019
python-cinder-9.1.4-41.el7ost.noarch Sat Mar 23 11:59:00 2019
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
Following the investigation performed during the April events, the agreed plan is:
1) Make sure all nodes are running the latest os-brick
2) Manually clean up stale connection information that may have been left behind while running an older os-brick, as instructed in the customer portal ticket.
3) Monitor for any new occurrences of the problem.
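Step 3 above could be sketched as a periodic check that flags node records with no matching active session. This is only an assumption-laden sketch: it treats "stale" as "record present but no session in iscsiadm -m session output", the find_stale name is hypothetical, and the SESSIONS override exists purely so the logic can be exercised without a live initiator.

```shell
# Sketch: read target IQNs (one per line) from stdin and report any
# that do not appear in the active session list.
find_stale() {
    # SESSIONS can be pre-captured for testing; otherwise query iscsiadm.
    sessions=${SESSIONS:-$(iscsiadm -m session 2>/dev/null)}
    while read -r iqn; do
        case "$sessions" in
            *"$iqn"*) ;;                 # an active session exists
            *) echo "stale: $iqn" ;;     # candidate for manual cleanup
        esac
    done
}
```

In practice the IQN list would come from the node database itself, e.g. piping the second field of `iscsiadm -m node` output into the function, and any "stale:" hits would be candidates for the manual cleanup in step 2.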
Archiving this bug against the current release.