openstack-cinder: OSP12 with IPv6 - failing to create a cinder volume: Value for option iscsi_ip_address is not valid

Environment:
openstack-tripleo-heat-templates-7.0.0-0.20170512193554.el7ost.noarch
instack-undercloud-7.0.0-0.20170503001109.el7ost.noarch
openstack-puppet-modules-10.0.0-0.20170315222135.0333c73.el7.1.noarch
openstack-cinder-11.0.0-0.20170515040117.dc60ec4.el7ost.noarch
python-cinderclient-2.0.1-0.20170320163530.d0790e3.el7.noarch
puppet-cinder-11.1.0-0.20170508094535.02e29ba.el7ost.noarch
python-cinder-11.0.0-0.20170515040117.dc60ec4.el7ost.noarch

Steps to reproduce:

1. Deploy the overcloud with:

openstack overcloud deploy --templates \
  --libvirt-type kvm \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /home/stack/virt/network/network-environment-v6.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation-v6.yaml \
  -e /home/stack/virt/hostnames.yml \
  -e /home/stack/virt/docker-osp12.yaml \
  -e /home/stack/virt/debug.yaml \
  -e /home/stack/virt/nodes_data.yaml \
  --log-file overcloud_deployment_68.log

(overcloud) [stack@undercloud-0 ~]$ cat virt/network/network-environment-v6.yaml
---
# This template configures each role to use Vlans on a single nic for
# each isolated network, but uses multiple nic's on each node:
#
# nic1 = pxe/management/ctlplane
# nic2 = VLAN trunk for network isolation
# nic3 = public/external access
#
# This template assumes use of network-isolation.yaml.
#
# FIXME: if/when we add functionality to heatclient to include heat
# environment files we should think about using it here to automatically
# include network-isolation.yaml.
resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: three-nics-vlans/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: three-nics-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: three-nics-vlans/controller-v6.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: three-nics-vlans/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: three-nics-vlans/ceph-storage.yaml

parameter_defaults:
  ExternalNetCidr: '2620:52:0:13b8::/64'
  ExternalAllocationPools: [{'start': '2620:52:0:13b8:5054:ff:fe3e:1', 'end': '2620:52:0:13b8:5054:ff:fe3e:aa'}]
  ExternalInterfaceDefaultRoute: 2620:52:0:13b8::fe
  ExternalNetworkVlanID: 10
  InternalApiNetCidr: 'fd00:fd00:fd00:2000::/64'
  InternalApiAllocationPools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}]
  StorageNetCidr: 'fd00:fd00:fd00:3000::/64'
  StorageAllocationPools: [{'start': 'fd00:fd00:fd00:3000::10', 'end': 'fd00:fd00:fd00:3000:ffff:ffff:ffff:fffe'}]
  StorageMgmtNetCidr: 'fd00:fd00:fd00:4000::/64'
  StorageMgmtAllocationPools: [{'start': 'fd00:fd00:fd00:4000::10', 'end': 'fd00:fd00:fd00:4000:ffff:ffff:ffff:fffe'}]
  # DnsServers: ["2620:52:0:13b8::fe"]
  DnsServers: ["10.35.28.1"]
  EC2MetadataIp: 192.168.24.1
  ControlPlaneDefaultRoute: 192.168.24.1
  NeutronExternalNetworkBridge: ""
  NeutronBridgeMappings: "datacentre:br-ex,tenant:br-isolated"
  NeutronNetworkVLANRanges: "tenant:1000:2000"
  NeutronNetworkType: vxlan
  NeutronTunnelTypes: vxlan

2. Try to create a cinder volume with:

cinder create 1

Result:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+-------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+------+------+-------------+----------+-------------+
| 7013a057-17aa-425d-810c-5d412397c34d | error  | -    | 1    | -           | false    |             |
+--------------------------------------+--------+------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ cinder show 7013a057-17aa-425d-810c-5d412397c34d
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2017-06-14T18:41:42.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 7013a057-17aa-425d-810c-5d412397c34d |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | None                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 38aa1d7d939f47638698d30404e01409     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | error                                |
| updated_at                     | 2017-06-14T18:41:43.000000           |
| user_id                        | 4a4b392d7ea94ab39d1e865cbb4aa5fe     |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+

volume.log on the controller shows:

2017-06-14 18:53:23.733 237731 CRITICAL cinder [req-c9c37a9f-3dc7-4079-a224-2a710cae4839 - - - - -] ConfigFileValueError: Value for option iscsi_ip_address is not valid: [fd00:fd00:fd00:3000::14] is not a valid host address
2017-06-14 18:53:23.733 237731 ERROR cinder Traceback (most recent call last):
2017-06-14 18:53:23.733 237731 ERROR cinder   File "/usr/bin/cinder-volume", line 10, in <module>
2017-06-14 18:53:23.733 237731 ERROR cinder     sys.exit(main())
2017-06-14 18:53:23.733 237731 ERROR cinder   File "/usr/lib/python2.7/site-packages/cinder/cmd/volume.py", line 120, in main
2017-06-14 18:53:23.733 237731 ERROR cinder     launcher.wait()
2017-06-14 18:53:23.733 237731 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 581, in wait
2017-06-14 18:53:23.733 237731 ERROR cinder     self.conf.log_opt_values(LOG, logging.DEBUG)
2017-06-14 18:53:23.733 237731 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2796, in log_opt_values
2017-06-14 18:53:23.733 237731 ERROR cinder     _sanitize(opt, getattr(group_attr, opt_name)))
2017-06-14 18:53:23.733 237731 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3274, in __getattr__
2017-06-14 18:53:23.733 237731 ERROR cinder     return self._conf._get(name, self._group)
2017-06-14 18:53:23.733 237731 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2838, in _get
2017-06-14 18:53:23.733 237731 ERROR cinder     value = self._do_get(name, group, namespace)
2017-06-14 18:53:23.733 237731 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2881, in _do_get
2017-06-14 18:53:23.733 237731 ERROR cinder     % (opt.name, str(ve)))
2017-06-14 18:53:23.733 237731 ERROR cinder ConfigFileValueError: Value for option iscsi_ip_address is not valid: [fd00:fd00:fd00:3000::14] is not a valid host address
2017-06-14 18:53:23.733 237731 ERROR cinder
iscsi_ip_address is parsed as an oslo.config IPAddress. It looks like it accepts "fd00:fd00:fd00:3000::14" but not "[fd00:fd00:fd00:3000::14]" as a valid IP address. Removing the brackets from the iscsi_ip_address value in cinder.conf should work around this.
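For anyone who wants to reproduce the validation outside of Cinder, the option value can be fed straight to the oslo.config type from a shell. A minimal sketch, assuming oslo.config is importable on the controller (and note the "is not a valid host address" wording in the log suggests the option may actually be backed by oslo.config's HostAddress type, which tries IPAddress first, so both types show the same accept/reject split):

$ python -c 'from oslo_config import types; types.IPAddress()("fd00:fd00:fd00:3000::14")'
# exits 0 -- the bare IPv6 literal is accepted
$ python -c 'from oslo_config import types; types.IPAddress()("[fd00:fd00:fd00:3000::14]")'
# raises ValueError -- the bracketed RFC 3986 form is rejected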
Thanks Eric. Confirming that the workaround works: I edited /etc/cinder/cinder.conf to remove the brackets, so that

iscsi_ip_address=[ipv6address]

becomes

iscsi_ip_address=ipv6address

then executed "pcs resource restart openstack-cinder-volume", and volume creation now succeeds:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1
+------------------------------+--------------------------------------+
| Property                     | Value                                |
+------------------------------+--------------------------------------+
| attachments                  | []                                   |
| availability_zone            | nova                                 |
| bootable                     | false                                |
| consistencygroup_id          | None                                 |
| created_at                   | 2017-06-14T20:11:20.000000           |
| description                  | None                                 |
| encrypted                    | False                                |
| id                           | bbf5d41b-caa6-478e-a615-5fd27154252b |
| metadata                     | {}                                   |
| multiattach                  | False                                |
| name                         | None                                 |
| os-vol-tenant-attr:tenant_id | c52812903ef24730aed253b2afed7379     |
| replication_status           | None                                 |
| size                         | 1                                    |
| snapshot_id                  | None                                 |
| source_volid                 | None                                 |
| status                       | creating                             |
| updated_at                   | None                                 |
| user_id                      | a4c02a4e30dc4800a106789b64df7477     |
| volume_type                  | None                                 |
+------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| bbf5d41b-caa6-478e-a615-5fd27154252b | available | -    | 1    | -           | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
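For deployments with more than one controller, the same edit can be scripted. A hedged sketch, assuming iscsi_ip_address appears exactly once in /etc/cinder/cinder.conf with the bracketed value on a single line:

# strip the surrounding brackets from the IPv6 literal, then restart the service
$ sudo sed -i 's/^iscsi_ip_address=\[\(.*\)\]$/iscsi_ip_address=\1/' /etc/cinder/cinder.conf
$ sudo grep ^iscsi_ip_address /etc/cinder/cinder.conf   # should now show the bare IPv6 address
$ sudo pcs resource restart openstack-cinder-volume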
This seems to be a behavior change introduced in Cinder during Pike. I'm submitting a patch upstream to restore the previous behavior so that, in the near term, we don't break upgrades for configs that use brackets.
Just for completeness -- do we know that this same configuration used to work?
Shouldn't the bug move to MODIFIED? We can re-test once we see it ON_QA.
(In reply to Omri Hochman from comment #6)
> Shouldn't the bug move to MODIFIED?

There isn't a build yet that contains the fix.
Verified on:
openstack-cinder-11.0.0-0.20170710183119.4689591.el7ost.noarch

On an IPv6 deployment (1 controller, 1 compute), cinder create works and a volume is successfully created.

$ cinder list
+--------------------------------------+-----------+------------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name                   | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------------------------+------+-------------+----------+-------------+
| 1b03a088-1a84-4dae-b258-9b3a2a13c832 | available | Vol_on_IPv6_deployment | 1    | -           | false    |             |
+--------------------------------------+-----------+------------------------+------+-------------+----------+-------------+

Adding that the brackets are still present in cinder.conf, but volume creation now works:

[heat-admin@controller-0 ~]$ sudo grep iscsi_ip_address /etc/cinder/cinder.conf
iscsi_ip_address=[fd00:fd00:fd00:3000::17]
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462