Bug 1649737 - VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Host controller-0 doesn't have FC initiators.
Summary: VolumeBackendAPIException: Bad or unexpected response from the storage volume...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-os-brick
Version: 13.0 (Queens)
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: z6
Target Release: 13.0 (Queens)
Assignee: Eric Harney
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On:
Blocks: 1697818
 
Reported: 2018-11-14 12:13 UTC by Nilesh
Modified: 2019-12-06 13:20 UTC
CC List: 12 users

Fixed In Version: python-os-brick-2.3.4-2.el7ost
Doc Type: Bug Fix
Doc Text:
Cause: A python library that is used for managing Fibre Channel connections was missing from the glance-api container image. Consequence: When glance is configured to use cinder for its storage backend, attempts to create an image would fail when the corresponding cinder volume was accessed over Fibre Channel. Fix: The python requirements have been updated to ensure the library is included in all container images that utilize FC connections. Result: Glance is able to successfully store images on cinder volumes accessed over Fibre Channel.
Clone Of:
Cloned To: 1697818 (view as bug list)
Environment:
Last Closed: 2019-03-26 10:35:37 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
RDO 18485 0 None None None 2019-01-23 19:37:06 UTC
RDO 18513 0 None None None 2019-03-12 14:24:55 UTC

Comment 1 Alan Bishop 2018-11-14 17:09:34 UTC
When the cinder-volume service runs in a container, some backend drivers
require a "plugin" that provides external dependencies required by the
driver. This plugin is essentially a customized version of the cinder-volume
container image, and it's something that has to be supplied by the vendor as
part of the OSP driver certification program.

The Unity driver has external dependencies (storops, etc.), and Dell EMC has
been working on a Cinder plugin for the driver (see [1]).

[1] https://access.redhat.com/containers/#/product/287917d11b90c3fe

Is the deployment using this plugin?

Comment 2 Dariusz 2018-11-15 14:32:23 UTC
Yes, this plugin is used by the deployment.

According to the log, the connector is triggered by cinder.volume.manager, which is not a Dell EMC script:

2018-11-15 14:27:11.752 56 ERROR cinder.volume.manager [req-59ce0309-e3ca-4605-a04d-2dddf4f4e18d ce7579e57c8e427a91fc8ba37eccb564 aaefd262
9f0f41959b63ae59f2d0e1fe - default default] Driver initialize connection failed (error: Bad or unexpected response from the storage volume
 backend API: Host controller-0 doesn't have FC initiators.).: VolumeBackendAPIException: Bad or unexpected response from the storage volu
me backend API: Host controller-0 doesn't have FC initiators.

The output from this function shows that Linux is not providing the FC information:

 'connector': {u'initiator': u'iqn.1994-05.com.redhat:4229c52f101d', u'ip': u'controller-0', u'platform':
 u'x86_64', u'host': u'controller-0', u'do_local_attach': False, u'os_type': u'linux2', u'multipath': False}
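To make the failure mode concrete, here is a minimal sketch (hypothetical code, not the shipped driver) reproducing the Unity driver's guard against the connector dict logged above; the dict carries only an iSCSI initiator and no 'wwnns'/'wwpns' keys, so the check fails:

```python
# Sketch (hypothetical): the Unity driver rejects any os-brick connector
# dict that lacks FC WWNs.
def has_fc_initiators(connector):
    """Return True if the os-brick connector reports FC initiators."""
    return 'wwnns' in connector and 'wwpns' in connector

# Connector dict as reported in the log above: iSCSI initiator only.
connector = {
    'initiator': 'iqn.1994-05.com.redhat:4229c52f101d',
    'ip': 'controller-0',
    'platform': 'x86_64',
    'host': 'controller-0',
    'do_local_attach': False,
    'os_type': 'linux2',
    'multipath': False,
}

if not has_fc_initiators(connector):
    print("Host %s doesn't have FC initiators." % connector['host'])
```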

Comment 3 Dariusz 2018-11-15 14:53:20 UTC
(In reply to Dariusz from comment #2)
> Yes, this plugin is used by the deployment.
> 
> According to the log, the connector is triggered by cinder.volume.manager,
> which is not a Dell EMC script:
> 
> 2018-11-15 14:27:11.752 56 ERROR cinder.volume.manager
> [req-59ce0309-e3ca-4605-a04d-2dddf4f4e18d ce7579e57c8e427a91fc8ba37eccb564
> aaefd262
> 9f0f41959b63ae59f2d0e1fe - default default] Driver initialize connection
> failed (error: Bad or unexpected response from the storage volume
>  backend API: Host controller-0 doesn't have FC initiators.).:
> VolumeBackendAPIException: Bad or unexpected response from the storage volu
> me backend API: Host controller-0 doesn't have FC initiators.
> 
> The output from this function shows that Linux is not providing the FC
> information:
> 
>  'connector': {u'initiator': u'iqn.1994-05.com.redhat:4229c52f101d', u'ip':
> u'controller-0', u'platform':
>  u'x86_64', u'host': u'controller-0', u'do_local_attach': False, u'os_type':
> u'linux2', u'multipath': False}


+------------------+---------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                            | Zone | Status  | State | Updated At                 |
+------------------+---------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller-0                    | nova | enabled | up    | 2018-11-15T14:24:26.000000 |
| cinder-volume    | hostgroup@tripleo_dellemc_unity | nova | enabled | up    | 2018-11-15T14:24:18.000000 |
+------------------+---------------------------------+------+---------+-------+----------------------------+


From inside the docker container we can check the FC hosts using this command:
systool -c fc_host -v
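Equivalently (a sketch, assuming a Linux host; on machines without FC HBAs the loop simply finds nothing), the same information systool reports can be read straight from sysfs:

```python
import glob
import os

# Sketch: the fc_host class that systool queries is just a sysfs
# directory tree, so the WWNs can be read directly from its attributes.
def read_fc_hosts(sysfs_root='/sys/class/fc_host'):
    """Return {host name: {attribute: value}} for every FC host found."""
    hosts = {}
    for host_dir in glob.glob(os.path.join(sysfs_root, 'host*')):
        entry = {}
        for attr in ('node_name', 'port_name', 'port_state'):
            path = os.path.join(host_dir, attr)
            if os.path.exists(path):
                with open(path) as f:
                    entry[attr] = f.read().strip()
        hosts[os.path.basename(host_dir)] = entry
    return hosts

print(read_fc_hosts())
```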


Checking the Python code in dell_emc/unity/utils.py:

def extract_fc_uids(connector):
    if 'wwnns' not in connector or 'wwpns' not in connector:
        msg = _("Host %s doesn't have FC initiators.") % connector['host']
        LOG.error(msg)
        raise exception.VolumeBackendAPIException(data=msg)

It looks like the "connector" response is not able to get the FC information from Linux.

Thanks

Comment 4 Alan Bishop 2018-11-15 18:06:18 UTC
I see an attachment in the case that shows the "systool -c fc_host -v" output when executing on the controller, and it looks good. But we need to verify the data is the same when the command runs inside the cinder-volume container.

Please run this on the controller:

% docker exec -u root -ti <cinder volume container>  systool -c fc_host -v

Use "docker ps | grep cinder" to find the exact name of the cinder-volume container. You can post an attachment to the case, or add it to this BZ and mark it private.

Comment 6 Alan Bishop 2018-11-16 16:16:03 UTC
I reviewed the data and confirmed the cinder-volume container is able to "see" the host's FC ports, which is good. What's still not clear is why the Unity backend driver finds the port data to be missing from the os-brick connector. This is probably something for Dell EMC to investigate.
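For context on how the FC keys reach the connector: in this release os-brick shells out to `systool -c fc_host -v` (provided by the sysfsutils package) and scrapes the port/node names from its output. The sketch below (hypothetical parsing code, with a made-up output sample) shows why a missing systool binary silently yields a connector without 'wwpns'/'wwnns' rather than an explicit error:

```python
import re

# Representative (assumed) fragment of `systool -c fc_host -v` output.
SAMPLE_SYSTOOL_OUTPUT = """\
Class = "fc_host"

  Class Device = "host1"
    node_name           = "0x20000090fa534cd0"
    port_name           = "0x10000090fa534cd0"
    port_state          = "Online"
"""

def parse_wwns(systool_output, attr):
    """Extract WWNs (without the 0x prefix) for the given attribute."""
    pattern = r'%s\s*=\s*"0x([0-9a-f]+)"' % attr
    return re.findall(pattern, systool_output)

wwpns = parse_wwns(SAMPLE_SYSTOOL_OUTPUT, 'port_name')
wwnns = parse_wwns(SAMPLE_SYSTOOL_OUTPUT, 'node_name')
# If the systool binary is absent there is no output to parse, both
# lists stay empty, and the FC keys are simply omitted from the
# connector dict.
```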

Comment 7 Dariusz 2018-11-17 19:16:32 UTC
(In reply to Alan Bishop from comment #6)
> I reviewed the data and confirmed the cinder-volume container is able to
> "see" the host's FC ports, which is good. What's still not clear is why the
> Unity backend driver finds the port data to be missing from the os-brick
> connector. This is probably something for Dell EMC to investigate.

A ticket has been created to involve Dell EMC in the investigation.

Please keep in touch.

Comment 13 Dariusz 2019-01-23 09:25:28 UTC
After our investigation we realized that the following libraries are missing from the Glance container:
sysfsutils-2.1.0-16.el7.x86_64 and libsysfs-2.1.0-16.el7.x86_64

We prepared a container with the mentioned packages, and Cinder as a backend for Glance (with Unity) works fine.

Comment 14 Alan Bishop 2019-01-23 19:37:06 UTC
The root problem is that it's os-brick's responsibility to provide the packages it needs. Eric has already posted a patch.

Comment 18 Alan Bishop 2019-03-12 14:24:56 UTC
The fix was imported from upstream queens-rdo.

Comment 20 Tzach Shefi 2019-03-13 09:02:21 UTC
Eric,
Just to be sure before taking a stab at this:
substituting EMC Unity (which I don't have) with 3par FC shouldn't affect verification,
as we're only talking about the missing packages for os-brick (comments #13-14), which are backend-agnostic.

All I need to do would be:
1. Configure Cinder to use 3par FC.
2. Configure Glance to use Cinder as its backend.
3. Successfully upload an image to Glance to wrap this up, correct?

If I missed anything, let me know.
Thanks

Comment 22 Tzach Shefi 2019-03-25 18:35:16 UTC
Verified on:
python2-os-brick-2.3.4-2.el7ost.noarch


Installed a system with Cinder over a 3par FC backend.

Added the below file to overcloud_deploy.sh to enable Glance over Cinder:

(overcloud) [stack@puma52 ~]$ cat glanceovercinder.yaml 
---
parameter_defaults:
  GlanceBackend: cinder
  ExtraConfig:
    glance::config::api_config:
      glance_store/cinder_store_auth_address:
        value: "%{hiera('glance::api::authtoken::auth_url')}/v3"
      glance_store/cinder_store_user_name:
        value: glance
      glance_store/cinder_store_password:
        value: "%{hiera('glance::api::authtoken::password')}"
      glance_store/cinder_store_project_name:
        value: "%{hiera('glance::api::authtoken::project_name')}"


Cinder uses 3par FC as backend:
(overcloud) [stack@puma52 ~]$ cinder service-list
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                    | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller-0            | nova | enabled | up    | 2019-03-25T18:32:07.000000 | -               |
| cinder-volume    | controller-0@3parfc     | nova | enabled | up    | 2019-03-25T18:32:08.000000 | -               |
| cinder-volume    | hostgroup@tripleo_iscsi | nova | enabled | down  | 2019-03-25T18:24:13.000000 | -               |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+

An image was successfully uploaded to Glance, over Cinder using a 3par FC backend:

(overcloud) [stack@puma52 ~]$ glance image-create --disk-format qcow2 --container-format bare --name Cirros3par --file cirros-0.3.5-i386-disk.img
+------------------+-----------------------------------------------+
| Property         | Value                                         |
+------------------+-----------------------------------------------+
| checksum         | 7316af7358dd32ca1956d72ac2c9e147              |
| container_format | bare                                          |
| created_at       | 2019-03-25T18:28:52Z                          |
| direct_url       | cinder://a3a1fce3-34b0-42b7-85a8-905d62568aa5 |
| disk_format      | qcow2                                         |
| id               | 0f7e0bc0-6bc2-42ce-ba05-80fa87835241          |
| min_disk         | 0                                             |
| min_ram          | 0                                             |
| name             | Cirros3par                                    |
| owner            | dd3ededc1f484f8a8be1cf3834890fed              |
| protected        | False                                         |
| size             | 12528640                                      |
| status           | active                                        |
| tags             | []                                            |
| updated_at       | 2019-03-25T18:29:40Z                          |
| virtual_size     | None                                          |
| visibility       | shared                                        |
+------------------+-----------------------------------------------+


As to why this failed on a previous attempt: acting on a hunch, I'd switched controller/compute roles on the bare-metal nodes; maybe that fixed the issue I'd hit before.

Anyway, this now looks OK to verify.

Comment 24 Lon Hohberger 2019-03-26 10:35:37 UTC
According to our records, this should be resolved by python-os-brick-2.3.4-2.el7ost.  This build is available now.

Comment 25 Andrey Yurtaykin 2019-12-06 12:40:48 UTC
(In reply to Dariusz from comment #13)
> After our investigation we realized that the following libraries are missing
> from the Glance container:
> sysfsutils-2.1.0-16.el7.x86_64 and libsysfs-2.1.0-16.el7.x86_64 
> 
> We prepared a container with the mentioned packages, and Cinder as a backend
> for Glance (with Unity) works fine.

Was it mistaken that the glance container needs those libs?
It looks like we should add them to the cinder-volume container, not glance.

Comment 26 Alan Bishop 2019-12-06 13:20:02 UTC
The packages are required by os-brick, which is used by several containerized services (cinder-backup, cinder-volume, nova-compute, glance-api). By making sysfsutils (which pulls in libsysfs) a requirement for os-brick, we ensure the package is present in any container that uses os-brick.
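As a quick sanity check (a sketch, not part of the fix itself), the practical effect of the change can be confirmed by testing whether the systool binary that sysfsutils ships is on the PATH inside each container that uses os-brick:

```python
import shutil

# Sketch: sysfsutils ships the `systool` binary that os-brick's FC code
# depends on. Running this inside each os-brick-consuming container
# (cinder-volume, cinder-backup, nova-compute, glance-api) shows whether
# the dependency is actually present.
def systool_available():
    """Return True if the systool binary from sysfsutils is on PATH."""
    return shutil.which('systool') is not None

print('systool present:', systool_available())
```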

