When the cinder-volume service runs in a container, some backend drivers require a "plugin" that provides the external dependencies the driver needs. This plugin is essentially a customized version of the cinder-volume container image, and it must be supplied by the vendor as part of the OSP driver certification program. The Unity driver has external dependencies (storops, etc.), and Dell EMC has been working on a Cinder plugin for the driver (see [1]).

[1] https://access.redhat.com/containers/#/product/287917d11b90c3fe

Is the deployment using this plugin?
Yes, the deployment is using this plugin.

According to the log, the connector is triggered by cinder.volume.manager, which is not a Dell EMC script:

2018-11-15 14:27:11.752 56 ERROR cinder.volume.manager [req-59ce0309-e3ca-4605-a04d-2dddf4f4e18d ce7579e57c8e427a91fc8ba37eccb564 aaefd2629f0f41959b63ae59f2d0e1fe - default default] Driver initialize connection failed (error: Bad or unexpected response from the storage volume backend API: Host controller-0 doesn't have FC initiators.).: VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Host controller-0 doesn't have FC initiators.

The output from this function shows us that Linux is not able to send information about FC:

'connector': {u'initiator': u'iqn.1994-05.com.redhat:4229c52f101d', u'ip': u'controller-0', u'platform': u'x86_64', u'host': u'controller-0', u'do_local_attach': False, u'os_type': u'linux2', u'multipath': False}
(In reply to Dariusz from comment #2)
> Yes, the deployment is using this plugin.
>
> According to the log, the connector is triggered by cinder.volume.manager,
> which is not a Dell EMC script:
>
> 2018-11-15 14:27:11.752 56 ERROR cinder.volume.manager
> [req-59ce0309-e3ca-4605-a04d-2dddf4f4e18d ce7579e57c8e427a91fc8ba37eccb564
> aaefd2629f0f41959b63ae59f2d0e1fe - default default] Driver initialize
> connection failed (error: Bad or unexpected response from the storage
> volume backend API: Host controller-0 doesn't have FC initiators.).:
> VolumeBackendAPIException: Bad or unexpected response from the storage
> volume backend API: Host controller-0 doesn't have FC initiators.
>
> The output from this function shows us that Linux is not able to send
> information about FC:
>
> 'connector': {u'initiator': u'iqn.1994-05.com.redhat:4229c52f101d', u'ip':
> u'controller-0', u'platform': u'x86_64', u'host': u'controller-0',
> u'do_local_attach': False, u'os_type': u'linux2', u'multipath': False}

+------------------+---------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                            | Zone | Status  | State | Updated At                 |
+------------------+---------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller-0                    | nova | enabled | up    | 2018-11-15T14:24:26.000000 |
| cinder-volume    | hostgroup@tripleo_dellemc_unity | nova | enabled | up    | 2018-11-15T14:24:18.000000 |
+------------------+---------------------------------+------+---------+-------+----------------------------+

From docker we can see the FC hosts using the command:

systool -c fc_host -v

Checking the Python code in dell_emc/unity/utils.py:

def extract_fc_uids(connector):
    if 'wwnns' not in connector or 'wwpns' not in connector:
        msg = _("Host %s doesn't have FC initiators.") % connector['host']

It looks like the "connector" response is not able to get this information from Linux.

Thanks
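The failing check can be reproduced standalone. Below is a minimal sketch, not the actual driver code (which raises VolumeBackendAPIException rather than ValueError), showing why the connector reported in the log trips the Unity driver's FC-initiator check:

```python
def extract_fc_uids(connector):
    """Mirror of the check in dell_emc/unity/utils.py: the driver needs
    the FC world-wide node/port names from the os-brick connector."""
    if 'wwnns' not in connector or 'wwpns' not in connector:
        # The real driver raises VolumeBackendAPIException here.
        raise ValueError(
            "Host %s doesn't have FC initiators." % connector['host'])
    return connector['wwnns'], connector['wwpns']

# The connector reported in the log: iSCSI data only, no 'wwnns'/'wwpns'.
observed = {
    'initiator': 'iqn.1994-05.com.redhat:4229c52f101d',
    'ip': 'controller-0',
    'platform': 'x86_64',
    'host': 'controller-0',
    'do_local_attach': False,
    'os_type': 'linux2',
    'multipath': False,
}

try:
    extract_fc_uids(observed)
except ValueError as exc:
    print(exc)  # same message as in the traceback above
```

So the driver itself is behaving as designed; the question is why os-brick never populated the 'wwnns'/'wwpns' keys.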
I see an attachment in the case that shows the "systool -c fc_host -v" output when executed on the controller, and it looks good. But we need to verify the data is the same when the command runs inside the cinder-volume container. Please run this on the controller:

% docker exec -u root -ti <cinder volume container> systool -c fc_host -v

Use "docker ps | grep cinder" to find the exact name of the cinder-volume container. You can post an attachment to the case, or add it to this BZ and mark it private.
I reviewed the data and confirmed the cinder-volume container is able to "see" the host's FC ports, which is good. What's still not clear is why the Unity backend driver finds the port data to be missing from the os-brick connector. This is probably something for Dell EMC to investigate.
(In reply to Alan Bishop from comment #6)
> I reviewed the data and confirmed the cinder-volume container is able to
> "see" the host's FC ports, which is good. What's still not clear is why the
> Unity backend driver finds the port data to be missing from the os-brick
> connector. This is probably something for Dell EMC to investigate.

A ticket has been created to involve Dell EMC in the investigation. Please keep in touch.
After our investigation we realized that the following libraries are missing from the Glance container: sysfsutils-2.1.0-16.el7.x86_64 and libsysfs-2.1.0-16.el7.x86_64.

We prepared a container with the mentioned packages, and Glance with Cinder as its backend works fine with Unity.
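For anyone hitting this, here is a hedged diagnostic sketch of how the missing package manifests: os-brick's Fibre Channel code shells out to the `systool` binary (shipped in the sysfsutils package) to enumerate FC HBAs, and when the binary is absent inside the container it finds no HBAs, so the connector ends up without 'wwnns'/'wwpns' keys. A quick check you could run inside the container:

```python
# Diagnostic sketch (Python 3): confirm the systool binary that
# os-brick's FC code invokes is actually available on PATH inside
# the container. If this returns False, the connector will lack
# FC initiator data regardless of the host's hardware.
import shutil

def fc_tooling_present():
    """Return True if the systool binary used by os-brick is on PATH."""
    return shutil.which('systool') is not None
```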
The root problem is that it is os-brick's responsibility to pull in the packages it needs. Eric has already posted a patch.
The fix was imported from upstream queens-rdo.
Eric,

Just to be sure before taking a stab at this: substituting EMC Unity (which I don't have) with 3par FC shouldn't affect verification, as we're only talking about the missing packages for os-brick (comments #13-14), which are backend agnostic.

All I need to do would be:
1. Configure Cinder to use 3par FC.
2. Configure Glance to use Cinder as its backend.
3. Successfully upload an image to Glance to wrap this up, correct?

If I missed anything let me know. Thanks
Verified on:
python2-os-brick-2.3.4-2.el7ost.noarch

Installed a system with Cinder over a 3par FC backend. Added the file below to overcloud_deploy.sh to enable Glance over Cinder:

(overcloud) [stack@puma52 ~]$ cat glanceovercinder.yaml
---
parameter_defaults:
  GlanceBackend: cinder
  ExtraConfig:
    glance::config::api_config:
      glance_store/cinder_store_auth_address:
        value: "%{hiera('glance::api::authtoken::auth_url')}/v3"
      glance_store/cinder_store_user_name:
        value: glance
      glance_store/cinder_store_password:
        value: "%{hiera('glance::api::authtoken::password')}"
      glance_store/cinder_store_project_name:
        value: "%{hiera('glance::api::authtoken::project_name')}"

Cinder uses 3par FC as the backend:

(overcloud) [stack@puma52 ~]$ cinder service-list
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                    | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller-0            | nova | enabled | up    | 2019-03-25T18:32:07.000000 | -               |
| cinder-volume    | controller-0@3parfc     | nova | enabled | up    | 2019-03-25T18:32:08.000000 | -               |
| cinder-volume    | hostgroup@tripleo_iscsi | nova | enabled | down  | 2019-03-25T18:24:13.000000 | -               |
+------------------+-------------------------+------+---------+-------+----------------------------+-----------------+

An image was successfully uploaded to Glance over Cinder using the 3par FC backend:

(overcloud) [stack@puma52 ~]$ glance image-create --disk-format qcow2 --container-format bare --name Cirros3par --file cirros-0.3.5-i386-disk.img
+------------------+-----------------------------------------------+
| Property         | Value                                         |
+------------------+-----------------------------------------------+
| checksum         | 7316af7358dd32ca1956d72ac2c9e147              |
| container_format | bare                                          |
| created_at       | 2019-03-25T18:28:52Z                          |
| direct_url       | cinder://a3a1fce3-34b0-42b7-85a8-905d62568aa5 |
| disk_format      | qcow2                                         |
| id               | 0f7e0bc0-6bc2-42ce-ba05-80fa87835241          |
| min_disk         | 0                                             |
| min_ram          | 0                                             |
| name             | Cirros3par                                    |
| owner            | dd3ededc1f484f8a8be1cf3834890fed              |
| protected        | False                                         |
| size             | 12528640                                      |
| status           | active                                        |
| tags             | []                                            |
| updated_at       | 2019-03-25T18:29:40Z                          |
| virtual_size     | None                                          |
| visibility       | shared                                        |
+------------------+-----------------------------------------------+

As for why this failed on a previous attempt: acting on a hunch, I had switched the controller/compute roles on the bare metal nodes, and maybe that fixed the issue I'd hit before. Anyway, this now looks OK to verify.
According to our records, this should be resolved by python-os-brick-2.3.4-2.el7ost. This build is available now.
(In reply to Dariusz from comment #13)
> After our investigation we realized that the following libraries are
> missing from the Glance container: sysfsutils-2.1.0-16.el7.x86_64 and
> libsysfs-2.1.0-16.el7.x86_64.
>
> We prepared a container with the mentioned packages, and Glance with
> Cinder as its backend works fine with Unity.

Were you mistaken that the Glance container needs those libs? It looks like we should add them to the cinder-volume container, not Glance.
The packages are required by os-brick, which is used by several containerized services (cinder-backup, cinder-volume, nova-compute, glance-api). By making sysfsutils (which pulls in libsysfs) a requirement for os-brick, we ensure the package is present in any container that uses os-brick.
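For reference, a dependency fix of this kind amounts to a packaging-level change along these lines (an illustrative fragment, not the exact downstream patch):

```
# python-os-brick.spec (illustrative excerpt)
# sysfsutils provides the systool binary that os-brick invokes for FC
# HBA enumeration; it pulls in libsysfs as its own dependency.
Requires: sysfsutils
```

With the requirement declared in the package itself, every container image that installs python-os-brick automatically gets the tooling, instead of each service's image having to remember to add it.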