Description of problem:

Recently I deployed RHS via PXE and could not identify the product by looking at /etc/issue, which still read 'Red Hat Enterprise Linux Server release 6.2 (Santiago)'. After investigating, I found that the RHS ISO ships an additional ks.cfg inside the pxeboot initrd (sorry, I am from a RHEL background), which carries some cosmetic fixes (the /etc/issue modification) as well as configuration fixes (for instance, disabling SELinux) in its %post sections. I am pasting it below for reference.

-- snippet from ks.cfg located in the RHS ISO (RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso), path images/pxeboot/initrd.img --

%post --nochroot
# bz 798597
sed -i -e '/^Red Hat Enterprise Linux/c Red Hat Storage release 2.0 for On-Premise' /mnt/sysimage/etc/issue /mnt/sysimage/etc/issue.net
%end

%post
# for some reason selinux --disabled in the ks doesn't work? huh?
sed -i -e 's/\(^SELINUX=\).*$/\1disabled/' /etc/selinux/config
# bz 885574
chkconfig iscsid off
#/usr/sbin/fix-grub.sh
%end

-- /snippet --

- Honestly, I never expected us to ship Bugzilla fixes in ks.cfg (at the very least, a ks.cfg is not the place for official bug fixes, I think).

- The ks.cfg inside the initrd is not executed when RHS is installed via PXE/kickstart; it only runs when RHS is installed from the ISO. Unless we have an official statement, we cannot insist on or expect customers to use the RHS ISO for a large deployment.

- If the %post of that ks.cfg is not executed, the deployment will certainly break. For instance, SELinux will be left in enforcing mode after a PXE install, whereas it must be disabled in an RHS environment.

Version-Release number of selected component (if applicable):

- For this analysis I referred to RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso.

How reproducible:

- Always.

Actual results:

- The ks.cfg located in the RHS ISO at images/pxeboot/initrd.img is not executed, so some of the bug fixes are silently skipped, which will break the deployment.

Expected results:

- Bugzilla fixes should be included in the RHS packages.

Workaround:

Modify the PXE kickstart file based on the ks.cfg included in the RHS ISO. Honestly, though, I do not believe it is practical, for every deployment, to extract the ISO, extract the initrd, copy the %post from ks.cfg, and merge it into the kickstart file used for the PXE installation.

Thanks,
Dominic
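For reference, one way to inspect the embedded ks.cfg without doing a full install is to loop-mount the ISO and unpack the initrd. This is a minimal sketch, assuming the RHEL 6-era installer initrd.img is an xz/LZMA-compressed cpio archive (substitute zcat for "xz -dc" if it turns out to be gzip-compressed):

---
mkdir -p /mnt/iso /tmp/rhs-initrd
mount -o loop RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso /mnt/iso
cd /tmp/rhs-initrd
# assumption: the initrd is an xz/LZMA-compressed cpio archive
xz -dc /mnt/iso/images/pxeboot/initrd.img | cpio -idm
cat ks.cfg      # the embedded kickstart, if present at the archive root
umount /mnt/iso
---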
(In reply to comment #0)
> - Description of problem:
> Recently I had deployed RHS on PXE ...

I believe this problem hits beaker too, in that a copy of the ks is used in beaker, but any changes have to be noticed and manually updated.

> - Honestly I never thought we will include Bugzilla fixes in ks.cfg ( At
> least, the intention of ks.cfg is not to include official bug fixes I think )

The reason bug fixes are going in the kickstart, rather than being made to the default configuration in rpms, is mainly so that we can keep using plain RHEL rpms rather than having to track every update and patch our copies of their rpms.

> Expected results:
> - Bz fixes should be included in RHS packages .

As far as I know this isn't feasible for the fixes included in the ks.

> Workaround: Modify PXE kickstart file based on ks.cfg included in RHS ISO.
> Btw, honestly I don't believe its practically possible to extract ISO in
> every deployment, extract initrd, copy %post of ks.cfg, modify kickstart
> file used for PXE installation.

It ought to be possible to include a sample kickstart for PXE folks, either on the ISO or in RHN; but I'd like some indication of what's actually useful for admins/customers on that score.
(In reply to comment #2)
> (In reply to comment #0)
> > - Description of problem:
> > Recently I had deployed RHS on PXE ...
>
> I believe this problem hits beaker too, in that a copy of the ks is used in
> beaker, but any changes have to be noticed and manually updated.

Manually updating it in beaker may work for us within Red Hat, but we are ultimately preparing this product for the Enterprise Customer, and you cannot expect an Enterprise Customer to perform this manual intervention across a large datacenter every single time we come up with a new ISO version.

> > - Honestly I never thought we will include Bugzilla fixes in ks.cfg ( At
> > least, the intention of ks.cfg is not to include official bug fixes I think )
>
> The reason bug fixes are going in the kickstart rather than being made to
> the default configuration in rpms is mainly so that we can keep using plain
> RHEL rpms rather than having to track every update and patch our copies of
> their rpms.

> > Expected results:
> > - Bz fixes should be included in RHS packages .
>
> As far as I know this isn't feasible for the fixes included in the ks.

Having fixes for BZs included in this fashion defeats the whole purpose, since the fixes will not be available to Enterprise Customers who do not install/update directly from the ISO. You have to expect that Customers with a large RHS server count may share the content of the ISO over NFS/HTTP/FTP/PXE and install the servers from those resources. RHS servers installed that way will not receive the fixes delivered through the initrd on the ISO, and the setup would definitely break. This will cause a lot of frustration among the Enterprise Customers who use a network method to install their RHS servers.

> > Workaround: Modify PXE kickstart file based on ks.cfg included in RHS ISO.
> > Btw, honestly I don't believe its practically possible to extract ISO in
> > every deployment, extract initrd, copy %post of ks.cfg, modify kickstart
> > file used for PXE installation.
>
> It ought to be possible to include a sample kickstart for PXE folks either
> on the iso or in rhn; but I'd like some indication of what's actually useful
> for admins/customers on that score.

If the Customer is expected to install RHS in one particular way only, that should be prominently documented. But we do mention PXE booting in the RHS Installation Guide:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage/2.0/html/Installation_Guide/sect-Installation_Guide-Install_RHS-boot_frm_pxe_1.html

In my honest opinion, we need to find a better way to get this done. Putting a lot of restrictions on the Customer, such as insisting on a manual ISO installation for every RHS server, or requiring repeated manual interventions on the PXE server, is certainly not the right way to progress, and could backfire on us eventually.
For adding a "New Server" into a "Server Cluster" in RHS-C, VDSM bootstrap script is currently having a dependency on the RPM "qemu-kvm-tools". --- # rpm -qa |grep qemu-kvm-tools qemu-kvm-tools-0.12.1.2-2.209.el6_2.5.x86_64 --- If this particular package is missing in the RHS node, the bootstrapping itself will fail and as a result, the host will not get added into the cluster at all. So further testing of the product is not possible. Following event message in the Console confirms the same: --- 2013-Jan-22, 17:05 Failed to install Host rhs-client30.lab.eng.blr.redhat.com. Yum Cannot queue package qemu-kvm-tools: No package(s) available to install. 2013-Jan-22, 17:05 Failed to install Host rhs-client30.lab.eng.blr.redhat.com. Failed to execute stage 'Package installation': No package(s) available to install. --- Now the exact issue: After I deployed RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso from PXE and tried to add the Host to a cluster, it was found failing always. Whereas, my deployments of the same RHS version from the ISO was getting added successfully! On analysing the logs and events, I was able to figure out that it was due to a missing package, "qemu-kvm-tools". But the root cause of this missing RPM, ONLY during the PXE deployments was still unknown to me! On further debugging, I saw that this package (+ a few more...) is specified in the %packages section of the kickstart file and were skipped during PXE installations. Following is the relevant section of the ks.cfg file located inside RHS ISO (RHS-2.0-20130115.0-RHS-x86_64-DVD1.iso) ---- %packages --nobase @core @rhs-tools glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication @glusterfs-swift @scalable-file-systems xfsprogs xfsdump # bz 798027 mdadm # bz 798304 tuned tuned-utils # u4 SDL mesa-libGLU qemu-kvm-tools %end ---- And as already reported by Dominic in this bug, ks.cfg inside initrd won't execute/call whenever the RHS installation is done via PXE/Beaker. So, you can't expect these packages to be present after the PXE installation! This is a serious issue for the QE team as we do most of our RHS deployments for our daily testing using PXE/beaker and we never think of any such issues. If things break like this in between, we will have to invest more time in debugging to find its root cause and then install the missing packages or make modifications as a temporary work-around. Moreover, this can also result in a fault assumption in the QA Engineer that the issue lies in some of the component he is testing, whereas the actual issue lies in the ISO build itself! This issue is asking me to avoid any further RHS deployments via PXE/beaker for my testing until a permanent fix is made and rather go with the ISO installation.
Created attachment 686674 [details]
Kickstart file
The issue reported in Comment 4 gives a glimpse of the damage that the current approach can cause, and this will keep escalating if we do not change our approach of using a kickstart file in the ISO to achieve mandatory requirements for the product.

The issue should NOT be seen as applicable only to the QE team or any other team in Red Hat; the focus of this BZ has to be ON the Enterprise Customer, for whom the product is being developed. During the development of a product, work-around procedures may be acceptable, of which the current approach is an example. However, the Enterprise Customer expects a final polished product, where all the fixes and dependencies are either part of the product, or resolved smoothly during the installation, whatever the method of installation may be.

I personally know that Enterprise Customers more often than not prefer network methods of installation for servers, and that would expose the inadequacy of the current approach, and make the Red Hat Storage product look very amateurish indeed.
Per the Feb-12 RHEV-RHS tiger team meeting, targeting this for 2.0.z U4.
The internal trees now include a ksappend.cfg file:
http://download.devel.redhat.com/composes/nightly/RHS-2.1-20130515.n.0/2/RHS/x86_64/os/kickstarts/ksappend.cfg

This will be used by beaker for RHS composes after the next update. If you're setting up PXE installs independently of beaker, specifying:

%include .../os/kickstarts/ksappend.cfg

at the end of your kickstart should work.
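To make that concrete, here is a minimal, hypothetical sketch of how a PXE kickstart could pull the fragment in. The URL is a placeholder for wherever the compose tree is mirrored, and the %pre/wget indirection is just one common way to fetch an %include target over the network (anaconda resolves %include after %pre has run, so including a file generated in %pre works):

---
%pre
# fetch the shared RHS fixes fragment (placeholder URL - adjust to your mirror)
wget -O /tmp/ksappend.cfg http://pxe.example.com/rhs/os/kickstarts/ksappend.cfg
%end

# ... install source, partitioning, %packages, etc. ...

# picks up the file fetched in %pre above
%include /tmp/ksappend.cfg
---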
The 'ksappend.cfg' file only fixes the issue internally for Red Hat. Bear in mind that we do support RHS installation over PXE. So how does this fix the issue for the customer who installs RHS over PXE? How would the 'ksappend.cfg' be available on the customer's PXE server, and how will it be ensured that PXE installations at the customer site always use it?

Please do not treat this as a Red Hat internal-only issue. We support PXE installation for RHS, so this issue can be considered fixed only if it is fixed from the customer perspective.

Moving the BZ back to the 'ASSIGNED' state, for the above stated reasons.
Hi Team,

Can I get an update on this bz? If my understanding is correct, this fix will be included in the coming release - Big Bend - which I believe is supposed to be out in another one to two months. However, the bz state is still ASSIGNED!

Btw, are we going to support the RHS Big Bend release over PXE?

Thanks,
The redhat-storage-server package includes the changes from the kickstart as part of its %post.
Just putting a quick summary here. As per BZ #852266, glusterfs does not work with selinux set to enforcing. In 2.0+, selinux was getting disabled as part of the default kickstart included with the install ISO:

%post --nochroot
# bz 798597
sed -i -e '/^Red Hat Enterprise Linux/c Red Hat Storage release 2.0 for On-Premise' /mnt/sysimage/etc/issue /mnt/sysimage/etc/issue.net
%end

%post
sed -i -e 's/\(^SELINUX=\).*$/\1disabled/' /etc/selinux/config
# bz 885574
chkconfig iscsid off
# bz 920103
chkconfig tuned on
%end

In 2.1 this has been moved to the %post section of the spec file for the redhat-storage-server package:

%post
chkconfig gluster-system-settings on
chkconfig iscsi off
chkconfig iscsid off
chkconfig tuned on
chkconfig ip6tables off
sed -i -e 's/\(^SELINUX=\).*$/\1disabled/' /etc/selinux/config
sed -i -e 's/Red Hat Storage.* release .*/Red Hat Storage Server release %{major_version}/' /etc/issue /etc/issue.net
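A quick way to spot-check that these %post changes actually landed on a given node, whether it was installed from the ISO or over the network - a sketch using standard RHEL 6 tooling:

---
grep '^SELINUX=' /etc/selinux/config   # expect: SELINUX=disabled
getenforce                             # expect: Disabled (after a reboot)
chkconfig --list tuned                 # expect: on in runlevels 2-5
chkconfig --list iscsid                # expect: off in all runlevels
head -1 /etc/issue                     # expect: Red Hat Storage Server release ...
---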
Thanks for your update. In platform BZs I usually see an explanation like comment #12 (thanks, Ben), along with the fixed package name/version, instead of a single-line reply; I thought RHS was part of RH. Btw, such a technical explanation can serve as future reference material (a knowledge base article), and it creates a great impression when our support team deals with customers, which is important! Thanks for your understanding.
Are there bugs available to show why SELinux needs to be disabled? Could we do it in permissive mode and collect the AVCs, so that we can get it to enforcing mode eventually? It really looks bad when a Red Hat package requires us to disable SELinux.
(In reply to Daniel Walsh from comment #14)
> Are there bugs available to show why SELinux needs to be disabled? Could we
> do it in permissive mode and collect the AVCs, so that we can get it to
> enforcing mode eventually? It really looks bad when a Red Hat package
> requires us to disable SELinux.

It is up to the RHS BU to make the policy decision regarding the mode of SELinux on RHS servers, and currently the decision is to keep SELinux disabled for now. So we cannot change it for the redhat-storage-server package.

We could have SELinux set to permissive mode in a test environment for us to collect the AVCs.

- rejy (rmc)
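For the test-environment idea above, a rough sketch of how the AVCs could be gathered (assumes a scratch node with auditd running; since SELinux is disabled on installed RHS systems, a config change and reboot are needed first):

---
# log denials without enforcing them
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
reboot    # required when moving from disabled to permissive

# after reboot, exercise glusterfs workloads, then collect the denials:
ausearch -m AVC -ts boot
# audit2allow -a can summarize the denials into candidate policy rules
---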
The kickstart file (ks.cfg) in the latest RHS ISO - RHS-2.1-20130806.n.2-RHS-x86_64-DVD1.iso - has the following content:

------------------------------------------------------
authconfig --enableshadow --passalgo=sha512
bootloader --location=mbr
firewall --disabled
install
keyboard us
lang en_US.UTF-8
logging --level=info
selinux --disabled
timezone --utc UTC
zerombr
services --enabled=ntpd

%post
%end
------------------------------------------------------

The introduction of the redhat-storage-server package has ensured that most of the changes from the kickstart file, as it existed at the time this BZ was opened, have been successfully moved to the new package. However, a few issues still remain, which have been captured in separate BZs:

1) Bug 994472 - Difference in hash algorithm used for user password, on Red Hat Storage (RHS) systems installed from ISO, and from Red Hat (RH) Satellite server

2) Bug 994460 - On a Red Hat Storage (RHS) 2.1 server installed from Red Hat (RH) Satellite server, the installation does not ensure that the required ports are opened at the firewall

3) Bug 920450 - 'ntp' and 'ntpdate' packages and 'ntpd' service not available on a Red Hat Storage (RHS) server installed through Red Hat Satellite server

Currently these three issues prevent this BZ from being qualified. So marking this BZ as depending on those 3 BZs being fixed, and moving it back to ASSIGNED.
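For completeness, the remaining deltas called out above could be spot-checked on a network-installed node with something like the following sketch (standard RHEL 6 tooling; what counts as a pass for each check should be confirmed against the respective BZs):

---
authconfig --test | grep hashing   # expect sha512, per BZ 994472
service iptables status            # are the required ports open? (BZ 994460)
rpm -q ntp ntpdate                 # packages present? (BZ 920450)
chkconfig --list ntpd              # service enabled? (BZ 920450)
---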
Red Hat Storage should not be turning off SELinux in its post install.
After some brainstorming I think I have something that may work here:

1. Remove the configuration changes from the %post section in the RPM.
2. Add a script to the RPM called rhs-configuration.sh (or whatever) that will handle all the configuration changes and prompt the user for confirmation on disabling selinux.
3. Have running the config script be the first thing we do in the install documentation.

Here is an example of what I was thinking:

#!/bin/bash

echo "Running the Red Hat Storage configuration script. Answering NO to any of these prompts is not recommended and could be problematic (especially SELinux; Red Hat Storage is not supported with SELinux enabled in this release)."

# Enable/disable services
echo "Enabling gluster system settings"
chkconfig gluster-system-settings on; echo "OK"
echo "Disabling iscsi initiator service"
chkconfig iscsi off; echo "OK"
echo "Disabling iscsid initiator service"
chkconfig iscsid off; echo "OK"
echo "Enabling tuned"
chkconfig tuned on; echo "OK"
echo "Disabling ip6tables"
chkconfig ip6tables off; echo "OK"

# Disable selinux. RHS NEEDS selinux disabled to work properly.
echo "RHS Configuration wants to disable SELinux; please enter yes to accept this (Note: glusterfs does not work with SELinux enabled in this release. Leaving SELinux enabled will result in an unusable product)."
echo "Please enter yes to disable selinux, no to exit"
read text
if [ "$text" = "yes" ]; then
    sed -i -e 's/\(^SELINUX=\).*$/\1disabled/' /etc/selinux/config
    echo "SELinux has been successfully disabled"
else
    echo "You have chosen not to disable selinux; as stated, RHS will not operate properly with SELinux enabled. Exiting."
    exit 1
fi

# update /etc/issue
echo "Updating /etc/issue and /etc/issue.net"
sed -i -e 's/Red Hat Storage.* release .*/Red Hat Storage Server release %{major_version}/' /etc/issue /etc/issue.net

Is this an acceptable solution?
(In reply to Ben Turner from comment #20)
> After some brainstorming I think I have something that may work here:
> 1. Remove the configuration changes from the %post section in the RPM.

This would be a significant change from the way installing the SSA 3.2 and RHS 2.0 isos worked. Not an objection here, but I'll defer to PM.

> 2. Add a script to the RPM called rhs-configuration.sh (or whatever) that
> will handle all the configuration changes and prompt the user for
> confirmation on disabling selinux.

There is already a /usr/lib/glusterfs/.unsupported/rhs-system-init.sh script; these changes could be included there, I think, if it were supported.

Cheers, aj
As per Rich and Sayan's bug triage, removing the blocker flag.
Rejy, some of the bugs are made non-blockers based on whether a bug is latent, or trivial enough to be documented, or an RFE, or a duplicate. This bug has a Release Engineering impact of 1, so it will be a blocker. The others are made non-blockers based on falling into one of the above categories.
Can we please move the SELinux requests, conversations, and test results to BZ 852266 (RFE: Support for selinux in enforcing mode for RHS server), which is specifically for enabling SELinux support on RHS servers? Having more feedback at that BZ would help to increase the priority of the request, and enable faster achievement of the objective.

This BZ is only for ensuring that all configs and settings achieved through the ks.cfg file in the initrd of the RHS ISO are replicated in network installs of RHS as well.
Thanks Rejy - posting the comment at the referenced BZ.
(In reply to Rejy M Cyriac from comment #16)
> However, a few issues still remain, which have been captured in separate BZs:
> 1) Bug 994472
> 2) Bug 994460
> 3) Bug 920450
>
> Currently these three issues prevent this BZ from being qualified. So
> marking this BZ as depending on those 3 BZs being fixed, and moving it
> back to ASSIGNED.

These have been decided to be documented, or are already fixed. PM has not acked the suggested changes with respect to SELinux, so I don't believe they should be considered part of this bug (they could be considered if refiled as a separate bug, though). As such, putting this bz back onto ON_QA.
Verified that the configs and settings achieved through the ks.cfg file in the initrd of the RHS ISO are being replicated in network installs of RHS as well, except for those reported in BZ 994472 and BZ 994460, the resolutions for which are to be documented.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html