
Bug 1702842

Summary: Cinder not deleting iscsi targets for deleted volumes
Product: Red Hat OpenStack
Reporter: Andreas Karis <akaris>
Component: python-os-brick
Assignee: Pablo Caruana <pcaruana>
Status: CLOSED CURRENTRELEASE
QA Contact: Tzach Shefi <tshefi>
Severity: high
Docs Contact:
Priority: high
Version: 10.0 (Newton)
CC: abishop, apevec, dhill, geguileo, jschluet, lhh, lyarwood, pcaruana, tenobreg, tshefi
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-07-08 10:13:30 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Andreas Karis 2019-04-24 22:29:18 UTC
Description of problem:
Cinder not deleting iscsi targets for deleted volumes

Issue: The customer is seeing that the iSCSI database (/var/lib/iscsi/nodes) keeps a record of every connection ever made on that server and is never cleaned up. Linux keeps retrying logins to the storage backend for those volumes, even though they have been deleted by cinder both on the storage array and in OpenStack.
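For reference only (these commands and paths are generic, not taken from the customer environment), the persisted records can be compared against the sessions that are actually active roughly like this:

$ ls /var/lib/iscsi/nodes/        # one directory per target IQN ever recorded
$ iscsiadm -m node                # persisted node records (portal + IQN)
$ iscsiadm -m session             # targets that currently have an active session

Node records that appear in the first two commands but have no matching active session, and no corresponding cinder volume, are the stale entries described above.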

Environment: The customer is using OpenStack 10 with SolidFire storage on the back end.

Current Status: Is cinder supposed to clean up the iSCSI database? And if it does so currently, did it already do so in OSP 10?

Some of the ERROR messages that we see in the nova-compute logs resemble those in https://bugzilla.redhat.com/show_bug.cgi?id=1599233

The issue is happening on all servers across the customer's four OpenStack environments, which together have hundreds of compute nodes.

Targets can be cleared manually, but cinder seems to leave orphaned targets.
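For illustration only (the IQN and portal below are placeholders, not values from this environment), a single orphaned record is typically cleared by logging out of the target and then deleting the persisted node entry:

$ iscsiadm -m node -T iqn.2010-01.com.solidfire:example-volume -p 192.0.2.10:3260 --logout
$ iscsiadm -m node -T iqn.2010-01.com.solidfire:example-volume -p 192.0.2.10:3260 -o delete

The logout step should only be run once the corresponding volume is confirmed to be detached and deleted.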

Version-Release number of selected component (if applicable):
OSP 10

$ grep os-brick installed-rpms 
python-os-brick-1.6.2-4.el7ost.noarch                       Sat Mar 23 11:55:51 2019
$ grep cinder installed-rpms 
openstack-cinder-9.1.4-41.el7ost.noarch                     Sat Mar 23 11:59:04 2019
puppet-cinder-9.5.0-6.el7ost.noarch                         Sat Mar 23 01:50:20 2019
python-cinderclient-1.9.0-6.el7ost.noarch                   Sat Mar 23 11:54:51 2019
python-cinder-9.1.4-41.el7ost.noarch                        Sat Mar 23 11:59:00 2019

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 18 Pablo Caruana 2019-07-08 10:13:30 UTC
After the investigation performed during the April events, the recommendation is:

1) Make sure all nodes are running the latest os-brick
2) Manually clean up any stale connection info that may have been left behind while running an older os-brick, as instructed in the customer portal ticket.
3) Monitor for any new occurrences of the problem (a rough check is sketched below).
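As a minimal sketch only, and not the procedure from the customer portal ticket: persisted node records that have no active session are candidates for stale entries and can be listed for review with something like the following (assumes bash and iscsiadm are available):

$ comm -23 <(iscsiadm -m node 2>/dev/null | awk '{print $2}' | sort -u) \
           <(iscsiadm -m session 2>/dev/null | awk '{print $4}' | sort -u)

Any IQN printed here should still be cross-checked against the volumes that are actually attached before its record is deleted.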

Archiving this one as resolved in the current release.