Bug 1047653
| Field | Value |
|---|---|
| Summary | openstack-foreman: cinder.conf not populated on LVM Block Storage node |
| Product | Red Hat OpenStack |
| Reporter | Yogev Rabl <yrabl> |
| Component | openstack-foreman-installer |
| Assignee | Jiri Stransky <jstransk> |
| Status | CLOSED ERRATA |
| QA Contact | Yogev Rabl <yrabl> |
| Severity | high |
| Priority | urgent |
| Version | 4.0 |
| CC | breeler, dron, jguiditt, jliberma, jstransk, mlopes, morazi, ramon, rhos-maint, sreichar, yeylon, yrabl |
| Target Milestone | z1 |
| Keywords | Rebase, ZStream |
| Target Release | 4.0 |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | storage |
| Fixed In Version | openstack-foreman-installer-1.0.3-1.el6ost |
| Doc Type | Rebase: Bug Fixes Only |

Doc Text:
Changes to Foreman after rebasing to version:
openstack-foreman-installer-1.0.3-1.el6ost
Highlights and important bug fixes:
Previously, deploying OpenStack Block Storage backed by LVM using the "LVM Block Storage" host group did not work. This has been fixed and now works correctly.
Note that the hosts need to have a "cinder-volumes" LVM volume group already created before assigning them to the "LVM Block Storage" host group, and the "cinder_backend_iscsi" variable should be set to "true" on that host group.
: 1056055 1056058 (view as bug list)
Last Closed: 2014-01-23 14:23:09 UTC
Type: Bug
Bug Blocks: 1040649, 1056055, 1056058
Description
Yogev Rabl
2014-01-01 15:39:44 UTC
@jistr, any insight on the above? I seem to remember you being involved in this area of the code, and there being a known manual step to create the VG, but maybe I am remembering wrong.

Missing config values
---------------------

The lvm_cinder.pp [1] class has changed quite a bit since my last commit there [2] and I didn't notice that. It seems that the host group is not just for LVM storage; it can also do Gluster mounting in case Gluster is enabled. I recall that the controller used to do the Gluster mounting before. I guess this change was needed to load-balance communication between compute nodes and Gluster, but we should verify with Gilles.

So if I read the manifest correctly, it now behaves like this: if "$cinder_backend_iscsi = true" is set, then those missing LVM-related config entries should get created, and if "$cinder_backend_gluster = true" is set, then it mounts the Gluster storage. This makes me think maybe we should rename the host group to "Block Storage" instead of "LVM Block Storage", and also rename the puppet class that backs the host group.

Volume group creation
---------------------

The volume group needs to be backed by physical volumes (partitions or disks), which means the user will have to think of this when doing partitioning on the node. (E.g. the user can use the Foreman feature for editing the part of the kickstart file that does the partitioning.) That means we cannot get to a state where the user can just deploy the storage host group on any unprepared node (e.g. one with all disk space already assigned to some other use). We could make it that simple if the volume group were backed by a loop file instead of a real partition or disk, but that doesn't feel to me like a solution for a production environment. We have a script for creating a volume group backed by a loop file [3], but that should be for testing only and it doesn't survive a reboot.

So if we want to improve this somehow, I think the farthest we can get is to accept a parameter like:

    $lvm_cinder_devices => ['/dev/sda2', '/dev/sdb']

Puppet could then do pvcreate and vgcreate (we'd need to add the lvm module to packstack-modules-puppet); a sketch of this idea appears after the reference links below. But even in that case the user still has to do the partitioning manually, in cases where partitioning is needed. I don't think we can automate beyond that; I'd say users could have various disk setups on their block storage nodes.

[1] https://github.com/redhat-openstack/astapor/blob/3f0e87f3da5429bb7d32059ef3f34ba5be979947/puppet/modules/quickstack/manifests/storage_backend/lvm_cinder.pp
[2] https://github.com/redhat-openstack/astapor/blob/76930baaa67c20a7f007bdfd2e692846a5252cc1/puppet/modules/quickstack/manifests/storage_backend/lvm_cinder.pp
[3] https://github.com/redhat-openstack/astapor/blob/3f0e87f3da5429bb7d32059ef3f34ba5be979947/bin/bridge-create.sh
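As a purely hypothetical sketch of the $lvm_cinder_devices idea above (not part of the actual fix), assuming the puppetlabs/lvm module were added to packstack-modules-puppet; the class name and the default device list are made up for illustration:

```puppet
# Hypothetical sketch only -- assumes the puppetlabs/lvm module (which provides
# the physical_volume and volume_group resource types) is available on the node.
class quickstack::storage_backend::cinder_vg (
  # devices the admin has already set aside for Cinder during partitioning
  $lvm_cinder_devices = ['/dev/sda2', '/dev/sdb'],
) {
  # pvcreate each listed device
  physical_volume { $lvm_cinder_devices:
    ensure => present,
  }

  # vgcreate the volume group that cinder-volume expects to find
  volume_group { 'cinder-volumes':
    ensure           => present,
    physical_volumes => $lvm_cinder_devices,
    require          => Physical_volume[$lvm_cinder_devices],
  }
}
```

Even with something like this, the admin still has to carve out the underlying partitions manually, which is the part that cannot be automated.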
Jiri -- I will look at the effects of "$cinder_backend_iscsi = true" a bit more. Whether or not I had it set to "true", the appropriate entries were not created in /etc/cinder/cinder.conf and tgtd was never started or used on either the controller or the block storage server. What I expect to happen is that if Foreman detects a cinder-volumes VG and this parameter is set to "true", then it will add the appropriate entries to cinder.conf, share this VG via tgtd, and also add the appropriate firewall rules. It should not matter whether cinder-volumes is backed by an iSCSI PV (except that it would need to be manually re-attached after a reboot) or a loopback device, so long as it detects the VG.

On that note, I do not think it's necessary to create the VG automatically. It is sufficient to document this manual step for customers and give them a choice of using a physical partition, an iSCSI session, or a loopback device.

Here is a cinder.conf diff between a working packstack deployment and a broken foreman deployment:

```
> osapi_volume_listen=0.0.0.0
2a4,5
> glance_host=10.16.137.101
> auth_strategy=keystone
4c7
< verbose=True
---
> verbose=False
14c17,20
< sql_connection=mysql://cinder:redhat.137.101/cinder
---
> iscsi_ip_address=10.16.137.107
> iscsi_helper=tgtadm
> volume_group=cinder-volumes
> sql_connection=mysql://cinder:fca9535e15fe4d18.137.101/cinder
54a61,70
```

This is in addition to the fixes mentioned above (tgtd and targets.conf). I set all of this up manually and restarted the cinder services. Then everything worked.

In my opinion, it should work as follows:
1. Both Neutron (controller) and LVM Block Storage have a cinder_backend_iscsi parameter.
2. When set to "true", that server will run the openstack-cinder-volume service, exporting the cinder-volumes VG through tgtd.
3. The cinder-volumes VG should be set up by the user, not handled by Foreman. This allows a loopback device, an iSCSI device, or a local partition.
4. Although it is possible to run cinder-volumes on many servers simultaneously, we would recommend that cinder_backend_iscsi be set to "false" on the controller if a Block Storage server is also used.

Created attachment 849703 [details]
packstack generated puppet manifests for working cinder config
Created attachment 849704 [details]
packstack generated manifests for working local LVM cinder server
I am a puppet novice, but it looks to me like iscsi.pp is not executed on the LVM Block Storage server. Looking at lvm_cinder.pp, I see a conditional for cinder_backend_iscsi to call iscsi.pp, but I don't see that this parameter is inherited from anywhere. The only place I see this parameter is in the neutron/controller.pp manifest. Anyway, I'd like to understand the relationship between the parameter and this manifest if it is not specifically inherited. Is it global?

(In reply to jliberma from comment #8)
> I am a puppet novice, but it looks to me like iscsi.pp is not executed on
> the LVM Block Storage server.
>
> Looking at lvm_cinder.pp, I see a conditional for cinder_backend_iscsi to
> call iscsi.pp, but I don't see that this parameter is inherited from
> anywhere.

I started poking through these manifests yesterday and saw the same thing; those parameters look like they are not passed in.

> The only place I see this parameter is in the neutron/controller.pp manifest.

Same here.

> Anyway, I'd like to understand the relationship between the parameter and
> this manifest if it is not specifically inherited. Is it global?

We don't have any global variables in foreman, so I think these must have gotten lost in some refactor. If this is the only problem, it will be an easy one to fix.

I'm testing a fix right now.

This patch did not work for me. lvm_storage.pp has the cinder_backend_iscsi parameter included from quickstack::params. In params.pp it is set to false. However, it is set to true in /usr/share/openstack-foreman-installer/bin/seeds.rb, and I verified the 'true' value was propagated successfully to the Neutron controller host group. So either I did not apply the patch in the correct place or it is pulling the parameter value from the wrong place. Here is my lvm_storage.pp. I applied the patch BEFORE I ran foreman_server.sh.

```
[root@osp4-foreman puppet]# find /usr -name lvm_storage.pp -exec cat {} \;
class quickstack::storage_backend::lvm_cinder(
  $cinder_backend_gluster = $quickstack::params::cinder_backend_gluster,
  $cinder_backend_iscsi   = $quickstack::params::cinder_backend_iscsi,
  $cinder_db_password     = $quickstack::params::cinder_db_password,
  $cinder_gluster_volume  = $quickstack::params::cinder_gluster_volume,
  $cinder_gluster_peers   = $quickstack::params::cinder_gluster_peers,
  $controller_priv_ip     = $quickstack::params::controller_priv_ip,
  $cinder_iscsi_iface     = 'em1',
  $mysql_host             = $quickstack::params::mysql_host,
  $qpid_host              = $quickstack::params::qpid_host,
  $verbose                = $quickstack::params::verbose,
) inherits quickstack::params {

  class { 'cinder':
    rpc_backend    => 'cinder.openstack.common.rpc.impl_qpid',
    qpid_hostname  => $qpid_host,
    qpid_password  => 'guest',
    sql_connection => "mysql://cinder:${cinder_db_password}@${mysql_host}/cinder",
    verbose        => $verbose,
  }

  class { 'cinder::volume': }

  if str2bool($cinder_backend_gluster) {
    class { 'gluster::client': }

    if ($::selinux != "false") {
      selboolean { 'virt_use_fusefs':
        value      => on,
        persistent => true,
      }
    }

    class { 'cinder::volume::glusterfs':
      glusterfs_mount_point_base => '/var/lib/cinder/volumes',
      glusterfs_shares           => suffix($cinder_gluster_peers, ":/${cinder_gluster_volume}")
    }
  }

  if str2bool($cinder_backend_iscsi) {
    class { 'cinder::volume::iscsi':
      iscsi_ip_address => getvar("ipaddress_${cinder_iscsi_iface}"),
    }

    firewall { '010 cinder iscsi':
      proto  => 'tcp',
      dport  => ['3260'],
      action => 'accept',
    }
  }
}
```
So I:
1. changed cinder_backend_iscsi to true in /usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/params.pp
2. re-ran puppet agent -tdv on the host in the LVM storage group
3. and now tgtd is started and the appropriate include statement is in /etc/tgt/targets.conf
4. I created and attached a test volume successfully

So here are the open questions:
1. If I set cinder_backend_iscsi to true in seeds.rb, why does the neutron controller get it but this host group does not?
2. Should lvm_storage.pp reference params.pp if this file is not updated with the values set in seeds.rb?
3. How can we make sure this storage group gets the parameters set in seeds.rb instead of params.pp?

It turns out I copied lvm_cinder.pp to lvm_storage.pp. When I name the manifest correctly, everything works. Testing again for final measure, but I believe this is resolved. Thanks Jason and Jiri!

Okay, I deployed from scratch, applied the patch, then added a host to the LVM block storage hostgroup. Everything worked! Thanks, Jacob

OK, this patch is merged and picked to the stable branch; it will go into the next release. Jacob, Jay, thanks for verification and merging to stable :)

There are other ways to do it:
- Ask for an iscsi target parameter.
- Connect to the target.
- Configure it as a PV.
- Create the cinder-volumes VG.

Or, you can use a file created with dd and loop it, as the Grizzly manual instructs: http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/cinder-install.html

(In reply to Yogev Rabl from comment #20)
> there are other ways to do it:
> - Ask for an iscsi target parameter.
> - Connect to the target.
> - Configure it as a PV.
> - Create the cinder-volumes VG.
>
> Or, you can use a file created with the dd and loop it like the Grizzly
> manual instruct:
> http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/cinder-install.html

In comment no. 3 I laid out some possibilities for volume group creation and why I think they're not very good to go with. Jacob then responded in comment no. 5: "3. the cinder-volumes VG should be set up by the user, not handled by Foreman. This allows loop back, iSCSI device, or local partition". So I'm a bit confused now; I thought we didn't want to create the volume group automatically.

Anyway, if we add the LVM module to puppet-modules-packstack, then we should be able to do the dd+loopfile solution. The iSCSI solution might be trickier because there doesn't seem to be a Puppet iSCSI module that is maintained and used by a community of people. And there is the possible physical device solution discussed in comment no. 3.

Yogev, nevertheless, even if we create the volume group manually, this BZ still contains a fix for the LVM Block Storage host group (and a plethora of comments regarding that fix), and that should still get verified and included in the release. So I'd vote for moving this BZ back to ON_QA, updating its title to "cinder-volume service doesn't get configured", and if we really want to do the dd+loopfile solution using Puppet (note we already have a script for it [1]), then creating a separate BZ for that. Is that ok? (I'm still not totally convinced that we want a Puppet-level solution for this; it seems like making Astapor more complicated for not too much benefit.)

[1] https://github.com/redhat-openstack/astapor/blob/abe947fa2b0a0d34bab6a72ce02b83bed3083394/bin/cinder-testing-volume.sh
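For reference only, a rough hypothetical Puppet rendering of the dd+loopfile idea mentioned above, along the lines of what bin/cinder-testing-volume.sh does with shell commands; the backing file path and 20G size are invented, and as noted the loop device does not survive a reboot, so this would be for testing only:

```puppet
# Hypothetical testing-only sketch: back the cinder-volumes VG with a loop file.
# The file path and size are illustrative assumptions, not values from the fix.
exec { 'cinder-volumes backing file':
  # create a 20 GiB sparse file
  command => 'dd if=/dev/zero of=/var/lib/cinder/cinder-volumes.img bs=1M count=0 seek=20480',
  creates => '/var/lib/cinder/cinder-volumes.img',
  path    => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
}

exec { 'cinder-volumes volume group':
  # losetup --show prints the loop device it allocated; vgcreate then uses it
  command  => 'vgcreate cinder-volumes $(losetup --show -f /var/lib/cinder/cinder-volumes.img)',
  unless   => 'vgs cinder-volumes',
  provider => shell,
  path     => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
  require  => Exec['cinder-volumes backing file'],
}
```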
I agree with Jiri's comments in 21, as well as Jacob's comment on 4: 'On that note, I do not think it's necessary to create the VG automatically. It is sufficient to document this manual step for customers and give them a choice of using a physical partition, an iSCSI session, or a loopback device.' I'd recommend that we flip this back to ON_QA to confirm that it works as written, and then write up additional RFEs for a future release if we really want to automate other parts of the installation process.

Jiri, from what I understand the problems are:
1. We don't have the parameter of the iscsi target to connect to.
2. We don't know whether the user wants to create a file and loop it, like the Grizzly manual shows.

Can we add:
1. two parameters: 'use local storage' with true/false values, and a parameter for the size in GB;
2. a parameter for the iscsi target, used as I've asked in comment 20?

As discussed on IRC, I've cloned this bug to capture requirements wrt volume group creation and its backends, and this BZ will be used to track the fix for cinder.conf not being populated.

Verified following our conversation on IRC: the changes that were made were changes to the firewall rules and the tgt service. Foreman doesn't create the cinder-volumes VG at the moment.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-0046.html