Bug 1105954 - Guest fails to start when the default security labeling is disabled
Summary: Guest fails to start when the default security labeling is disabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.6
Hardware: x86_64
OS: All
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1102612 (view as bug list)
Depends On: 1105939
Blocks:
 
Reported: 2014-06-09 02:09 UTC by zhenfeng wang
Modified: 2014-10-14 04:22 UTC
CC List: 7 users

Fixed In Version: libvirt-0.10.2-38.el6
Doc Type: Bug Fix
Doc Text:
Clone Of: 1105939
Environment:
Last Closed: 2014-10-14 04:22:26 UTC
Target Upstream Version:
Embargoed:


Attachments
The log about libvirtd (64.11 KB, text/plain)
2014-06-12 08:18 UTC, zhenfeng wang
Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1102612 0 medium CLOSED The running guest will disappear while change the security_driver from "none" to "selinux" 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2014:1374 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2014-10-14 08:11:54 UTC

Internal Links: 1102612

Description zhenfeng wang 2014-06-09 02:09:26 UTC
+++ This bug was initially created as a clone of Bug #1105939 +++

Description of problem:
The guest fails to start when security_default_confined = 0 is set in qemu.conf
and <seclabel type='none' model='dac'/> is added to the guest's XML.
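
For reference, <seclabel> is a direct child of the <domain> element in the guest XML; a minimal sketch of the relevant fragment (the guest name and the elided elements are illustrative):

<domain type='kvm'>
  <name>rhel7</name>
  ...
  <seclabel type='none' model='dac'/>
</domain>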

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-1.5.3-60.el7ev_0.2.x86_64
libvirt-1.1.1-29.el7.x86_64
kernel-3.10.0-123.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1.Disable the default security labeling in /etc/libvirt/qemu.conf
 security_default_confined = 0
 #service libvirtd restart

2.Start a normal guest
#virsh start rhel7

3.After the guest starts, check the guest's XML; we can see that the following content
was added automatically.
#virsh dumpxml rhel7
--
  <seclabel type='none' model='selinux'/>
--
4.Destroy the guest, then add the following content to the guest's XML:
# virsh dumpxml rhel7 |grep dac
  <seclabel type='none' model='dac'/>

5.Start the guest; it fails to start with an unknown error:
# virsh start rhel7
error: Failed to start domain rhel7
error: An error occurred, but the cause is unknown

6.Retry the above steps on RHEL 6.6; the guest also fails to start, but with
a different error. There is also some information in the libvirt and qemu logs.

# virsh start rhel7k
error: Failed to start domain rhel7k
error: internal error Unknown failure during hook execution

Check the libvirtd log:
2014-06-04 07:09:53.842+0000: 11599: error : virCommandHandshakeWait:2460 : internal error Unknown failure during hook execution

Check the qemu log:
2014-06-04 07:09:53.792+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name rhel7k -S -M rhel6.4.0 -enable-kvm -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 995a1b49-0924-ea37-107b-d9531cb6f59a -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel7k.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/var/lib/libvirt/images/rhel7.img,if=none,id=drive-ide0-0-0,format=qcow2,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=22,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:38:0c:9f,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/rhel7k.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/rhel7k.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,addr=127.0.0.1,disable-ticketing,seamless-migration=on -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
libvirt:  error : An error occurred, but the cause is unknown
2014-06-04 07:09:53.843+0000: shutting down
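
As an aside, when libvirt only reports a generic failure like this, raising the daemon's log level usually captures the underlying cause. A sketch using the standard libvirtd.conf settings (file locations per the default RHEL layout):

# /etc/libvirt/libvirtd.conf
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

# service libvirtd restart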

Actual results:
The guest fails to start with an unknown failure.

Expected results:
If starting the guest is allowed in this scenario, the guest should start successfully; if not, a clear error should be reported.

Comment 2 Ján Tomko 2014-06-09 15:50:00 UTC
Upstream patch posted:
https://www.redhat.com/archives/libvir-list/2014-June/msg00406.html

Comment 4 Ján Tomko 2014-06-10 08:23:32 UTC
Fixed upstream by:
commit f9bf63e673c11cd189748c29b6ea7d2cf19c8da7
Author:     Ján Tomko <jtomko>
AuthorDate: 2014-06-09 16:23:52 +0200
Commit:     Ján Tomko <jtomko>
CommitDate: 2014-06-10 10:18:24 +0200

    SELinux: don't fail silently when no label is present
    
    This fixes startup of a domain with:
    <seclabel type='none' model='dac'/>
    on a host with selinux and dac drivers and
    security_default_confined = 0
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1105939
    https://bugzilla.redhat.com/show_bug.cgi?id=1102611

git describe: v1.2.5-81-gf9bf63e
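
The commit title points at the underlying anti-pattern: a helper that returns failure without recording any error message, which is exactly what surfaces as libvirt's generic "An error occurred, but the cause is unknown". A small compilable sketch of the pattern and the fix (the names are hypothetical, not libvirt's actual code):

/* Sketch of the "silent failure" anti-pattern named in the commit above.
 * All names here are illustrative; this is not libvirt's code. */
#include <stdio.h>

static const char *last_error; /* stands in for libvirt's per-thread error object */

static int get_label_silent(const char *label, char *buf, size_t len)
{
    if (label == NULL)
        return -1; /* BAD: fails without recording a cause */
    snprintf(buf, len, "%s", label);
    return 0;
}

static int get_label_reported(const char *label, char *buf, size_t len)
{
    if (label == NULL) {
        last_error = "security label missing"; /* like virReportError() */
        return -1;
    }
    snprintf(buf, len, "%s", label);
    return 0;
}

int main(void)
{
    char buf[64];
    if (get_label_silent(NULL, buf, sizeof(buf)) < 0)
        printf("error: %s\n", last_error ? last_error : "cause is unknown");
    if (get_label_reported(NULL, buf, sizeof(buf)) < 0)
        printf("error: %s\n", last_error ? last_error : "cause is unknown");
    return 0;
}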

Comment 6 Ján Tomko 2014-06-11 08:13:08 UTC
*** Bug 1102612 has been marked as a duplicate of this bug. ***

Comment 8 zhenfeng wang 2014-06-12 07:58:38 UTC
Hi Jan
I found that libvirtd crashes when saving a guest with the configuration from comment 1, using the latest libvirt package. Please help check, thanks.

Steps:

1.Disable the default security labeling in /etc/libvirt/qemu.conf
 security_default_confined = 0
 #service libvirtd restart

2.Start a normal guest
#virsh start rhel6

3.After the guest starts, check the guest's XML; we can see that the following content
was added automatically.
#virsh dumpxml rhel6
--
  <seclabel type='none' model='selinux'/>
--
4.Destroy the guest, then add the following content to the guest's XML:
# virsh dumpxml rhel6 |grep dac
  <seclabel type='none' model='dac'/>

5.Start the guest; this time the guest starts successfully:
# virsh start rhel6
Domain rhel6 started

6.Save the guest; libvirtd crashes:
# virsh save rhel6 /tmp/rh6.save
error: Failed to save domain rhel6 to /tmp/rh6.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

# ps aux|grep libvirtd

(gdb) t a a bt

Thread 11 (Thread 0x7fffec0f0700 (LWP 6275)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7fffecaf1700 (LWP 6274)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7fffed4f2700 (LWP 6273)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7fffedef3700 (LWP 6272)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7fffee8f4700 (LWP 6271)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7fffef2f5700 (LWP 6270)):
#0  0x0000003d14681451 in __strlen_sse2 () from /lib64/libc.so.6
#1  0x0000003d14681166 in strdup () from /lib64/libc.so.6
#2  0x00007ffff7a0f81f in virParseOwnershipIds (label=0x0, 
    uidPtr=0x7fffef2f479c, gidPtr=0x7fffef2f4798) at util/util.c:3431
#3  0x000000000044db3e in qemuOpenFile (driver=<value optimized out>, 
    vm=<value optimized out>, path=0x7fffe413acd0 "/tmp/rh6.save", oflags=577, 
    needUnlink=0x7fffef2f482e, bypassSecurityDriver=0x7fffef2f482f)
    at qemu/qemu_driver.c:2757
#4  0x000000000046e19c in qemuDomainSaveMemory (driver=0x7fffe40d59a0, 
    vm=0x7fffe41cfbc0, path=0x7fffe413acd0 "/tmp/rh6.save", 
    domXML=<value optimized out>, compressed=0, 
    was_running=<value optimized out>, flags=0, asyncJob=QEMU_ASYNC_JOB_SAVE)
    at qemu/qemu_driver.c:2949
#5  0x000000000046e92f in qemuDomainSaveInternal (driver=0x7fffe40d59a0, 
    dom=0x7fffe413aba0, vm=0x7fffe41cfbc0, 
    path=0x7fffe413acd0 "/tmp/rh6.save", compressed=0, xmlin=0x0, flags=0)
    at qemu/qemu_driver.c:3095
#6  0x000000000046eece in qemuDomainSaveFlags (dom=0x7fffe413aba0, 
    path=0x7fffe413acd0 "/tmp/rh6.save", dxml=0x0, flags=0)
    at qemu/qemu_driver.c:3204
#7  0x00007ffff7ab4085 in virDomainSave (domain=0x7fffe413aba0, 
    to=0x7fffe413b3b0 "/tmp/rh6.save") at libvirt.c:2590
#8  0x000000000043ce56 in remoteDispatchDomainSave (
    server=<value optimized out>, client=<value optimized out>, 
    msg=<value optimized out>, rerr=0x7fffef2f4b80, 
    args=<value optimized out>, ret=<value optimized out>)
    at remote_dispatch.h:4630
#9  remoteDispatchDomainSaveHelper (server=<value optimized out>, 
    client=<value optimized out>, msg=<value optimized out>, 
    rerr=0x7fffef2f4b80, args=<value optimized out>, ret=<value optimized out>)
    at remote_dispatch.h:4608
#10 0x00007ffff7aeaa62 in virNetServerProgramDispatchCall (prog=0x7a2a70, 
    server=0x799ff0, client=0x79eb00, msg=0x79f640)
    at rpc/virnetserverprogram.c:431
#11 virNetServerProgramDispatch (prog=0x7a2a70, server=0x799ff0, 
    client=0x79eb00, msg=0x79f640) at rpc/virnetserverprogram.c:304
#12 0x00007ffff7aebd4e in virNetServerProcessMsg (srv=<value optimized out>, 
    client=0x79eb00, prog=<value optimized out>, msg=0x79f640)
    at rpc/virnetserver.c:170
#13 0x00007ffff7aec3ec in virNetServerHandleJob (
    jobOpaque=<value optimized out>, opaque=0x799ff0) at rpc/virnetserver.c:191
#14 0x00007ffff7a0c76c in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:144
#15 0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#16 0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#17 0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7fffefcf6700 (LWP 6269)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7ffff06f7700 (LWP 6268)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7ffff10f8700 (LWP 6267)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7ffff1af9700 (LWP 6266)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0c236 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0c803 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0c059 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7ffff798b860 (LWP 6265)):
#0  0x0000003d146df343 in poll () from /lib64/libc.so.6
#1  0x00007ffff79f9bac in virEventPollRunOnce () at util/event_poll.c:615
#2  0x00007ffff79f8de7 in virEventRunDefaultImpl () at util/event.c:247
#3  0x00007ffff7aeb58d in virNetServerRun (srv=0x799ff0)
    at rpc/virnetserver.c:748
#4  0x0000000000424237 in main (argc=<value optimized out>, 
    argv=<value optimized out>) at libvirtd.c:1229
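
Thread 6 is the telling one: virParseOwnershipIds() receives label=0x0 and passes it to strdup(), which crashes inside __strlen_sse2 because glibc's strdup() dereferences its argument unconditionally. A minimal compilable illustration of the crash and the missing NULL guard (function names are hypothetical):

/* Minimal illustration of the crash in Thread 6: strdup(NULL) reads
 * through a NULL pointer. The guard below is the kind of check the
 * caller needs before duplicating a possibly-absent label. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *dup_label_guarded(const char *label)
{
    if (label == NULL) /* the missing check */
        return NULL;
    return strdup(label);
}

int main(void)
{
    const char *label = NULL; /* as in the backtrace: label=0x0 */

    char *copy = dup_label_guarded(label);
    printf("guarded: %s\n", copy ? copy : "(null label, no crash)");
    free(copy);

    /* Calling strdup(label) directly here would segfault, just like
     * the __strlen_sse2 frame in the backtrace. */
    return 0;
}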

Comment 9 zhenfeng wang 2014-06-12 08:18:33 UTC
Created attachment 908005 [details]
The log about libvirtd

Comment 10 Ján Tomko 2014-06-12 08:46:30 UTC
I can reproduce the crash even with libvirt-0.10.2-36 (with security_driver = "none"), the fix for this bug just made it possible with driver = "selinux" too. I think that should be a separate bug.

Also, could you please check if a guest started with driver = "none" is still visible after changing it to "selinux" and restarting libvirtd (from the duplicate bug 1102612).

Comment 11 zhenfeng wang 2014-06-13 05:42:46 UTC
In fact, I can also hit the crash even with security_driver = "selinux"; I have filed separate bugs about that issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1108593
https://bugzilla.redhat.com/show_bug.cgi?id=1108590

The running guest was still visible after changing it from driver = "none" to driver = "selinux".

Comment 12 zhenfeng wang 2014-06-26 03:27:55 UTC
Bug 1108590 has been fixed. I re-verified this bug with the steps from comment 0 and comment 8, as well as the steps in bug 1102612; all of them give the expected result, so I am marking this bug verified.

Comment 13 zhenfeng wang 2014-06-26 11:05:39 UTC
Hi Jan
I found a new issue: a shut-off guest disappears if I restart the libvirtd service while the guest's XML contains <seclabel type='static' model='none' relabel='yes'/>. Please help check whether it is the same issue as this bug, thanks.

Packages:
kernel-2.6.32-466.el6.x86_64
libvirt-0.10.2-39.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.426.el6.x86_64

Steps:
1.Prepare a shutoff guest
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     rhel6                          shut off

2.Edit the guest and add the following content to the guest's XML:
#virsh edit rhel6
--
<seclabel type='static' model='none'  relabel='yes'/>
--

#virsh dumpxml rhel6
  <seclabel type='static' relabel='yes'/>

3.Check the guest status
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     rhel6                          shut off

4.Restart the libvirtd service
#service libvirtd restart

5.Re-check the guest status; the guest has disappeared:

# virsh list --all
 Id    Name                           State
----------------------------------------------------

# 

6.The issue always happens regardless of whether security_driver='selinux' or security_driver='none' is set in qemu.conf.

Comment 14 Ján Tomko 2014-06-26 14:02:25 UTC
The issue in comment 1 was caused by libvirt's SELinux driver quietly failing on missing labels.
The issue in comment 13 is caused by libvirt accepting a static label of model 'none' (which doesn't really make sense). It then formats the label without the model (because it is 'none') and fails to parse it back. I'd say this is a different bug (in the XML parser), more similar to bug 1027096 than this one.
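
To make that round-trip concrete, a sketch of the sequence (the two XML forms are taken from comment 13; the parse-failure reading is from comment 14):

accepted by virsh edit:   <seclabel type='static' model='none' relabel='yes'/>
formatted by libvirt:     <seclabel type='static' relabel='yes'/>
libvirtd restart:         the formatted form fails to parse back, so the guest
                          definition is dropped from 'virsh list --all'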

Comment 15 zhenfeng wang 2014-06-27 00:44:29 UTC
Thanks for Jan's quick reply. Will we fix this issue in bug 1027096, or file a separate one for it?

Comment 16 Ján Tomko 2014-06-27 04:55:56 UTC
I think a separate one would be better.

Comment 17 zhenfeng wang 2014-06-30 01:24:41 UTC
Thanks. I have filed new bugs about that issue; since the issue is also hit on RHEL 7, I cloned it to RHEL 7 as well:
https://bugzilla.redhat.com/show_bug.cgi?id=1113860
https://bugzilla.redhat.com/show_bug.cgi?id=1113861

Comment 19 errata-xmlrpc 2014-10-14 04:22:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1374.html

