Description of problem:
vgchange -a y does not detect volume groups when run from rc.sysinit.

Version-Release number of selected component (if applicable):
Linux hvracer4.ltc.austin.ibm.com 2.6.18-8.1.1.lspp.73.el5 #1 SMP Tue Apr 10 12:04:18 EDT 2007 ppc64 ppc64 ppc64 GNU/Linux
initscripts-8.45.14.EL-1
selinux-policy-mls-2.4.6-53.el5

How reproducible:
Install a system in the LSPP evaluated configuration. On first boot, vgchange -a y will fail.

Steps to Reproduce:
1. Install a ppc64 LPAR with RHEL5 GA + LSPP packages using the LSPP ks script.
2. Observe the console during first boot.
3. The console will show "Setting up Logical Volume Management: No volume groups found" at some point (see below).

Actual results:
Logical volumes in fstab get mounted, but new logical volumes are not found:

. . .
Setting hostname hvracer4.ltc.austin.ibm.com:  [  OK  ]
Setting up Logical Volume Management: No volume groups found
[  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/rootvg/rootlv
/dev/rootvg/rootlv: clean, 4346/524288 files, 116769/524288 blocks
[/sbin/fsck.ext3 (1) -- /boot] fsck.ext3 -a /dev/sda2
/boot: clean, 20/26104 files, 19276/104384 blocks
[/sbin/fsck.ext3 (1) -- /home] fsck.ext3 -a /dev/rootvg/homelv
/dev/rootvg/homelv: clean, 16/1048576 files, 67707/1048576 blocks
[/sbin/fsck.ext3 (1) -- /tmp] fsck.ext3 -a /dev/rootvg/tmplv
/dev/rootvg/tmplv: clean, 12/1048576 files, 67700/1048576 blocks
[/sbin/fsck.ext3 (1) -- /usr] fsck.ext3 -a /dev/rootvg/usrlv
/dev/rootvg/usrlv: clean, 44283/1048576 files, 328723/1048576 blocks
[/sbin/fsck.ext3 (1) -- /var] fsck.ext3 -a /dev/rootvg/varlv
/dev/rootvg/varlv: clean, 169/1048576 files, 74751/1048576 blocks
[/sbin/fsck.ext3 (1) -- /var/log] fsck.ext3 -a /dev/rootvg/varloglv
/dev/rootvg/varloglv: clean, 29/262144 files, 16851/262144 blocks
. . .
Expected results:
Volume groups should be found:

7 logical volume(s) in volume group "rootvg" now active

Additional info:
Rebooting in permissive mode causes this to work correctly. A similar failure can also be reproduced from the command line:

[root/sysadm_r/SystemHigh@hvracer4 etc]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=staff_u:sysadm_r:sysadm_t:SystemHigh
[root/sysadm_r/SystemLow@hvracer4 etc]# getenforce
Enforcing
[root/sysadm_r/SystemHigh@hvracer4 etc]# run_init vgchange -a y
Authenticating ealuser.
Password:
No volume groups found
[root/sysadm_r/SystemHigh@hvracer4 etc]# setenforce 0
[root/sysadm_r/SystemHigh@hvracer4 etc]# run_init vgchange -a y
Authenticating ealuser.
Password:
7 logical volume(s) in volume group "rootvg" now active

This issue has been seen on ppc64 and s390x. Unsure about others, but I suspect it is present on them as well.
Did you get any AVCs (they may be in syslog)? Working in permissive mode but failing in enforcing suggests this is a selinux-policy problem and not a vgchange problem.
No, I don't see any AVC denials. I didn't originally find this one, but I'll give it a shot with enableaudit.pp. I agree this should probably be reassigned to policy.
I never noticed this, but I see the same thing on x86_64. I'm able to reproduce it from the command line. The only AVC I see is a granted one:

type=AVC msg=audit(1176318274.564:11791): avc: granted { setexec } for pid=17781 comm="run_init" scontext=staff_u:sysadm_r:run_init_t:s0-s15:c0.c1023 tcontext=staff_u:sysadm_r:run_init_t:s0-s15:c0.c1023 tclass=process
type=SYSCALL msg=audit(1176318274.564:11791): arch=c000003e syscall=1 success=yes exit=43 a0=3 a1=55555d19fab0 a2=2b a3=0 items=0 ppid=17746 pid=17781 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 comm="run_init" exe="/usr/sbin/run_init" subj=staff_u:sysadm_r:run_init_t:s0-s15:c0.c1023 key=(null)

There's nothing in my messages file, but I'm up with auditd running so I wouldn't expect that.
Needs mls_file_read_up(lvm_t)

Fixed in selinux-policy-2.4.6-56
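For anyone hitting this before the updated package lands, the same interface call could in principle be applied as a local policy module. This is only a sketch: the module name "lvmreadup" is invented here, and it assumes the standard selinux-policy-devel build tooling; the real fix ships in selinux-policy-2.4.6-56.

```shell
# Hypothetical local module granting lvm_t the MLS read-up override.
# The module name "lvmreadup" is made up; the interface call mirrors
# the fix described above.
cat > lvmreadup.te <<'EOF'
policy_module(lvmreadup, 1.0)

gen_require(`
	type lvm_t;
')

mls_file_read_up(lvm_t)
EOF

# Build and load (requires selinux-policy-devel; run as root):
# make -f /usr/share/selinux/devel/Makefile lvmreadup.pp
# semodule -i lvmreadup.pp
```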
I'm wondering whether we really need mls_file_read_up or whether we have a file that's at the wrong level.
I was curious about what was really going on, so I did some digging. It looks like the command initially fails because the disk devices are at SystemHigh and we've been running the command at SystemLow (or SystemLow-SystemHigh). Doing that, I see AVCs like this:

type=AVC msg=audit(1176415513.415:10852): avc: denied { getattr } for pid=19198 comm="vgchange" name="sda" dev=tmpfs ino=948 scontext=staff_u:sysadm_r:lvm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file

Was caused by:
Constraint violation. Check policy/constraints. Typically, you just need to add a type attribute to the domain to satisfy the constraint.

And the devices are indeed at SystemHigh:

[root/sysadm_r/SystemHigh@kipper home]# ls -lZ /dev/sda
brw-r----- root disk system_u:object_r:fixed_disk_device_t:SystemHigh /dev/sda

So then I tried the same thing after doing a newrole to SystemHigh. This time the command fails with this error:

[root/sysadm_r/SystemHigh@kipper home]# vgchange -a y
/var/lock/lvm/V_VolGroup01: open failed: Permission denied
Can't lock VolGroup01: skipping

When I look at the AVCs, I believe it's failing because the lock directory is SystemLow:

type=AVC msg=audit(1176416210.035:11048): avc: denied { write } for pid=19265 comm="vgchange" name="lvm" dev=dm-0 ino=527214 scontext=staff_u:sysadm_r:lvm_t:s15:c0.c1023 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=dir

Was caused by:
Constraint violation. Check policy/constraints. Typically, you just need to add a type attribute to the domain to satisfy the constraint.

I went into permissive mode and ran it again, and I believe there are other files at SystemLow too: almost everything in /etc/lvm, including the directory itself.

[root/sysadm_r/SystemHigh@kipper home]# ls -laZ /etc/lvm
drwxr-xr-x root root system_u:object_r:lvm_etc_t:SystemLow .
drwxr-xr-x root root system_u:object_r:etc_t:SystemLow ..
drwx------ root root system_u:object_r:lvm_metadata_t:SystemLow archive
drwx------ root root system_u:object_r:lvm_metadata_t:SystemLow backup
-rw------- root root staff_u:object_r:lvm_metadata_t:SystemHigh .cache
-rw-r--r-- root root system_u:object_r:lvm_etc_t:SystemLow lvm.conf

Interesting that the .cache file is SystemHigh. So, if the disks are SystemHigh, then should everything related to LVM also be SystemHigh?
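The mismatch being described can be boiled down to the level field of the two contexts. A tiny shell sketch (the `level` helper is mine; the context strings are copied from the ls -Z output quoted in this bug) makes the conflict explicit:

```shell
#!/bin/sh
# Extract the MLS level field from a SELinux context string of the
# form user:role:type:level. The "level" helper name is made up;
# the contexts below come from the ls -Z output in this report.
level() {
    printf '%s\n' "$1" | cut -d: -f4
}

disk_ctx="system_u:object_r:fixed_disk_device_t:SystemHigh"
lock_ctx="system_u:object_r:lvm_lock_t:SystemLow"

# The constraint violation: a process raised to the disk's level
# (SystemHigh) then cannot write the SystemLow lock directory.
echo "disk: $(level "$disk_ctx")"   # disk: SystemHigh
echo "lock: $(level "$lock_ctx")"   # lock: SystemLow
```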
I am not sure, but I think this becomes the classic MLS problem where all files slowly gravitate to SystemHigh. I think saying the LVM tools are trusted applications is the best we can do.
While the better fix would likely be to modify the fcontexts rather than add an MLS override, the existing solution should suffice. The device itself is at the proper level and none of its contents are being written down. Rather than introduce potential instability, I'd vote to keep the current solution. I'd love to hear other opinions. In general, I think granting overrides should be considered carefully.
I agree with Dan and George. Has anyone verified the fix with the new policy? (We have no power at work right now so I can't try.)
Works as long as you aren't logged in at SystemHigh. Otherwise you get:

/var/lock/lvm/V_myvolgroup: open failed: Permission denied
Sounds like it needs to be able to write down as well.
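If write-down really is needed on top of read-up, the same hypothetical local-module approach from earlier could pair the two overrides. Again, a sketch only: the module name "lvmmls" is invented, and it assumes the mls_file_write_down interface is available in this policy's interface set.

```shell
# Hypothetical module pairing the read-up override with a write-down
# override for the SystemLow lock directory. Module name is made up.
cat > lvmmls.te <<'EOF'
policy_module(lvmmls, 1.0)

gen_require(`
	type lvm_t;
')

mls_file_read_up(lvm_t)
mls_file_write_down(lvm_t)
EOF
```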
Why would you want to run the tool at SystemHigh? Seems dangerous.
Running admin utilities at SystemLow needs to be documented.
A fix for this issue has been included in the packages contained in the beta (RHN channel) or the most recent snapshot (partners.redhat.com) for RHEL5.1. Please verify that your issue is fixed.

After you (Red Hat Partner) have verified that this issue has been addressed, please perform the following:
1) Change the *status* of this bug to VERIFIED.
2) Add the *keyword* PartnerVerified (leaving the existing keywords unmodified).

If you cannot access bugzilla, please reply with a message to Issue Tracker and I will change the status for you. If this issue is not fixed, please add a comment describing the most recent symptoms of the problem you are having and change the status of the bug to ASSIGNED.
Verified fixed on RHEL 5.1 Snap 2.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2007-0544.html