Bug 979193 - Storage operations are slow, long waits on OperationMutex
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.1.4
Hardware: x86_64 Linux
Priority: urgent, Severity: urgent
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Yeela Kaplan
QA Contact: Aharon Canan
Whiteboard: storage
Keywords: ZStream
Depends On:
Blocks: 1001031
Reported: 2013-06-27 17:37 EDT by Marina
Modified: 2016-02-10 12:37 EST (History)
CC List: 22 users

See Also:
Fixed In Version: is14
Doc Type: Bug Fix
Doc Text:
Previously, storage operations (such as creating virtual machines from templates) were slow due to various redundant validation checks and refreshes. These checks have been removed, so storage performance has improved.
Story Points: ---
Clone Of:
: 1001031
Environment:
Last Closed: 2014-01-21 11:26:22 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
patch for lvm.py (1003 bytes, patch)
2013-07-02 18:54 EDT, Marina
no flags Details | Diff
the flow from Ruby to RHEV (9.17 KB, text/plain)
2013-07-10 00:32 EDT, Marina
no flags Details
ps aux output from spm host (102.30 KB, text/plain)
2013-07-12 17:14 EDT, Marina
no flags Details
commit b1a3d4fa4e33c55ba7613cec27ffd6aa98dc79d0 (2.62 KB, patch)
2013-07-18 09:58 EDT, Marina
no flags Details | Diff
commit e72a9ce379640e7ec3b134ef7453ee3e25f714d9 (6.34 KB, patch)
2013-07-18 10:02 EDT, Marina
no flags Details | Diff
summary of storage commands after applying patch3 to vdsm (4.35 MB, text/plain)
2013-08-15 20:12 EDT, Marina
no flags Details
processed output of a file from patch3 (217.90 KB, text/plain)
2013-08-15 20:14 EDT, Marina
no flags Details
vdsm.log after patch3 (948.84 KB, application/x-xz)
2013-08-15 20:15 EDT, Marina
no flags Details
all the patches provided up to 28 Aug (8.43 KB, application/x-xz)
2013-08-28 19:35 EDT, Marina
no flags Details


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 441203 None None None Never
oVirt gerrit 18274 None None None Never
oVirt gerrit 18275 None None None Never

Description Marina 2013-06-27 17:37:52 EDT
A request to create 8 VMs from a template (thin provision) takes on average 10 minutes to complete.
Looking at the vdsm logs we can see that the tasks spend too much time waiting on the operation mutex - sometimes about 30 seconds.
Here is an example from one task:

vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:13,985::lvm::414::OperationMutex::(_reloadlvs) Got the operational mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:22,021::lvm::414::OperationMutex::(_reloadlvs) Got the operational mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:32,539::lvm::488::OperationMutex::(_invalidatevgs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
>> 7 sec
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:39,662::lvm::488::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:39,667::lvm::490::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:39,668::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:39,670::lvm::510::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:40,803::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
>> 3 sec
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:43,829::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:43,831::lvm::510::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:43,837::lvm::414::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:45:58,291::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:04,299::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:08,735::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
>> 11 sec
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:11,953::lvm::498::OperationMutex::(_invalidatelvs) Got the operational mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:11,954::lvm::510::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:11,960::lvm::414::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:15,919::lvm::414::OperationMutex::(_reloadlvs) Got the operational mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:30,859::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:38,702::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
>> 13 sec
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:43,186::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:43,975::lvm::498::OperationMutex::(_invalidatelvs) Operation 'lvm reload operation' is holding the operation mutex, waiting...
>> 5 sec
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:48,255::lvm::498::OperationMutex::(_invalidatelvs) Got the operational mutex
vdsm.log.1.xz:8be2e436-22d8-4f6e-bfa9-4569e4a7c8b2::DEBUG::2013-06-26 14:46:48,259::lvm::510::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
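
For context, these OperationMutex messages reflect a named-lock pattern in vdsm's lvm module: a single "operation mutex" serializes the lvm reload and invalidate operations, and whichever operation holds it forces the others to wait. The snippet below is only an illustrative sketch of that pattern (not vdsm's actual implementation), kept minimal to show where the "waiting..." / "got" / "released" lines come from:

import logging
import threading

log = logging.getLogger("OperationMutex")

class OperationMutex:
    # Illustrative sketch only, not vdsm's code: one named operation holds
    # the lock while every other operation logs that it is waiting.
    def __init__(self):
        self._cond = threading.Condition(threading.Lock())
        self._holder = None

    def acquire(self, operation):
        with self._cond:
            while self._holder is not None:
                log.debug("Operation '%s' is holding the operation mutex, waiting...",
                          self._holder)
                self._cond.wait()
            self._holder = operation
            log.debug("Operation '%s' got the operation mutex", operation)

    def release(self):
        with self._cond:
            log.debug("Operation '%s' released the operation mutex", self._holder)
            self._holder = None
            self._cond.notify_all()

With a pattern like this, one slow reload (for example an lvs call that stalls on unresponsive devices) holds the mutex for its whole duration, and every queued invalidate/reload stacks up behind it, which matches the multi-second gaps marked above.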


Additional Information:
The customer is using RHEV with EVM. EVM issues requests to spin up VMs from a template via the REST API.
Because creating the VMs takes so long, EVM times out and fails the request.
Result: EVM is unusable. Sev1 case.
Increasing the timeout on the EVM side may help, but according to the customer, each additional request takes longer to get a reply.

RHEV Setup:
Two datacenters in one RHEV-M setup: Default (FC), Lab (iSCSI).
Issuing a similar request to the iSCSI DC completes about 3 times faster; we would expect similar behaviour on the FC DC.
We checked general performance of the host + storage -> seems fine (https://bugzilla.redhat.com/show_bug.cgi?id=970179#c7).
All the actual storage commands complete reasonably fast, and it seems the bottleneck is the waiting on the mutex.

Another piece of information: according to the customer, the issue started after upgrading the hypervisors to 20130501.0.el6_4 (not sure from which version, probably 20130318).
I do not think this matters; the problem actually starts when the system gets loaded.

Versions:
Hosts: 20130501.0.el6_4 (vdsm-4.10.2-1.13.el6ev)
RHEVM: rhevm-3.2.0-11.30.el6ev.noarch
Comment 2 Marina 2013-06-27 17:45:37 EDT
Created attachment 766351 [details]
Engine.log

Search for AddVmCommands coming from ajp
Comment 3 Marina 2013-06-27 17:47:00 EDT
Created attachment 766353 [details]
vdsm.log
Comment 4 Federico Simoncelli 2013-06-28 05:09:35 EDT
LVM operations are slow probably because one or more storage domains are unreachable:

959484ad-7caa-42eb-b78f-76e53d278817::DEBUG::2013-06-26 15:01:59,868::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '  /dev/sdba: read failed after 0 of 4096 at 0: Input/output error\n  /dev/sdba: read failed after 0 of 4096 at 2199023190016: Input/output error\n  /dev/sdba: read failed after 0 of 4096 at 2199023247360: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/sdba was disabled\n  /dev/sdw: read failed after 0 of 4096 at 0: Input/output error\n  /dev/sdw: read failed after 0 of 4096 at 2199023190016: Input/output error\n  /dev/sdw: read failed after 0 of 4096 at 2199023247360: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/sdw was disabled\n  /dev/sdbc: read failed after 0 of 4096 at 0: Input/output error\n  /dev/sdbc: read failed after 0 of 4096 at 2199023190016: Input/output error\n  /dev/sdbc: read failed after 0 of 4096 at 2199023247360: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/sdbc was disabled\n  /dev/sdbe: read failed after 0 of 4096 at 0: Input/output error\n  /dev/sdbe: read failed after 0 of 4096 at 2199023190016: Input/output error\n  /dev/sdbe: read failed after 0 of 4096 at 2199023247360: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/sdbe was disabled\n'; <rc> = 0
Comment 5 Marina 2013-06-28 10:08:27 EDT
(In reply to Federico Simoncelli from comment #4)
> LVM operations are slow probably because one or more storage domains are
> unreachable:
> 
From talking to the storage team, I understand that the devices in question are not the active devices, and that is what happens if you scan them while they are actually unavailable.
For instance:
 
3600601605ab02c00620df4e048a0e211 dm-2 DGC,VRAID
size=2.0T features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 1:0:1:5 sdn  8:208  active ready running
| |- 1:0:0:5 sdg  8:96   active ready running
| |- 2:0:1:5 sdas 66:192 active ready running
| `- 2:0:0:5 sdam 66:96  active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 1:0:2:5 sdu  65:64  active ready running
  |- 1:0:3:5 sdab 65:176 active ready running
  |- 2:0:2:5 sday 67:32  active ready running
  `- 2:0:3:5 sdbe 67:128 active ready running  -> problematic

So the sdbe device should not be used: it is a passive path, not an active one, and we should not try writing to it.
Comment 12 Marina 2013-07-02 12:05:30 EDT
After some testing this morning, we figured out that the filter created by vdsm should be modified to look only at the relevant PVs, accessing them by the full path (/dev/mapper/<wwid>).

Compare three outputs:
[#1 with the specific filter]
# lvs -vvvv --config " devices { preferred_names = [\"^/dev/mapper/\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \"a|/dev/mapper/3600601605ab02c0028c71db4c076e211|/dev/mapper/3600601605ab02c005a0df4e048a0e211|/dev/mapper/3600601605ab02c005c0df4e048a0e211|/dev/mapper/3600601605ab02c005e0df4e048a0e211|/dev/mapper/3600601605ab02c00600df4e048a0e211|/dev/mapper/3600601605ab02c00620df4e048a0e211|/dev/mapper/3600601605ab02c00640df4e048a0e211|\", \"r|.*|\" ] } global { locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator '|'  -o uuid,name,attr,size,tags,vg_mda_size,vg_mda_free f48e93bb-e3bb-4291-bd2f-8d622ef8019d 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush() }' > /tmp/lvs_f48e93bb_3.log

~~~
START: 2013-07-02 15:34:45 #libdm-config.c:863       Setting activation/monitoring to 1
END:   2013-07-02 15:34:46 #libdm-config.c:799       Setting log/verbose to 0

1 second.
~~~

[#2 with the general filter to take all multipath devices]
# lvs -vvvv --config " devices { preferred_names = [\"^/dev/mapper/\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \"a|/dev/mapper/.*|\", \"r|.*|\" ] } global { locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator '|'  -o uuid,name,attr,size,tags,vg_mda_size,vg_mda_free f48e93bb-e3bb-4291-bd2f-8d622ef8019d 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush() }' > /tmp/lvs_f48e93bb.log

~~~
START: 2013-07-02 14:54:51 #libdm-config.c:863       Setting activation/monitoring to 1
END:   2013-07-02 14:55:02 #libdm-config.c:799       Setting log/verbose to 0

11 seconds.
~~~

[#3 The original filter was taking all the devices, without the full paths]
# vgs --config " devices { preferred_names = [\"^/dev/mapper/\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ 'a\%3600601605ab02c0028c71db4c076e211|3600601605ab02c005a0df4e048a0e211|3600601605ab02c005c0df4e048a0e211|3600601605ab02c005e0df4e048a0e211|3600601605ab02c00600df4e048a0e211|3600601605ab02c00620df4e048a0e211|3600601605ab02c00640df4e048a0e211|36848f690eba0850018d3cd8a15d6a65c%\', 'r\%.*%\' ] } global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator '|' -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free f48e93bb-e3bb-4291-bd2f-8d622ef8019d
Comment 13 Marina 2013-07-02 16:05:32 EDT
I tested the following on my setup and am going to build a test package with this change now:
# diff lvm.py /var/tmp/lvm.py 
137c137
<             devs.append('/dev/mapper/' + strippedDev.replace(r'\x', r'\\x'))
---
>             devs.append(strippedDev.replace(r'\x', r'\\x'))
140c140
<         filt = "'a|" + filt + "|', "
---
>         filt = "'a%" + filt + "%', "
142c142
<     filt = "filter = [ " + filt + "'r|.*|' ]"
---
>     filt = "filter = [ " + filt + "'r%.*%' ]"
-----------------------------------------------
This results in:
filter = [ \'a|/dev/mapper/360014052e1060300051d006000000000|/dev/mapper/360014052e1060300051d016000000000|/dev/mapper/360014052e1060300051d017000000000|\', \'r|.*|\' ]
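
For reference, here is a minimal, self-contained Python sketch of the filter construction implied by the diff above; the function name and structure are illustrative only, not vdsm's actual code:

def build_lvm_filter(wwids):
    # Whitelist the given multipath devices by their full /dev/mapper path
    # and reject everything else, mirroring the modified lvm.py lines above.
    devs = ['/dev/mapper/' + w.replace(r'\x', r'\\x') for w in wwids]
    filt = ''
    if devs:
        filt = "'a|" + '|'.join(devs) + "|', "
    return "filter = [ " + filt + "'r|.*|' ]"

print(build_lvm_filter([
    '360014052e1060300051d006000000000',
    '360014052e1060300051d016000000000',
    '360014052e1060300051d017000000000',
]))
# produces the filter string shown above (modulo the backslash escaping in the
# log), with full /dev/mapper paths and '|' delimiters instead of '%'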
Comment 15 Marina 2013-07-02 18:54:13 EDT
Created attachment 767987 [details]
patch for lvm.py
Comment 21 Marina 2013-07-03 15:28:38 EDT
Unfortunately, the fixed filter didn't help.
It slightly improved the times, but EVM requests are still failing on timeouts.

I am waiting for sos reports from the patched SPM to decide on next steps.
Comment 22 Marina 2013-07-03 18:02:37 EDT
I opened new bug for the filter only:
https://bugzilla.redhat.com/show_bug.cgi?id=981055

Here we should continue investigating the customer's original problem.
Comment 23 Marina 2013-07-10 00:30:43 EDT
After narrowing down the issue, we see that the problem happens when Ruby waits more than 30 seconds for a response from the RHEV REST API on a single Create VM request and times out with ERROR.

Looking into the RHEV side, for those timed-out requests we can see that most of the time is spent on GetImageInfoVDSCommand for the template we are trying to create the VM from (about two minutes for some requests).

Questions: 
1. Is the API flow correct and we must wait till we create the task for vdsm/storage to CreateSnapshot from the template (probably it is correct, otherwise how else we can acknowledge that the request was accepted by RHEV?)
2. Why do we need to issue GetImageInfoVDSCommand for the template before creating the task for CreateSnapshot?
3. Why does it take SPM host so long to reply to this command?

Additional information:
1. Log excerpts for this flow are available in the attached file ajp-127.0.0.1:8702-10.flow
2. SPM is running the patch for narrowing the lvm filter as described earlier on this bug and moved to https://bugzilla.redhat.com/show_bug.cgi?id=981055.
3. This problem occurs only on one specific storage domain that has 8 PVs and multiple LVs. When we issue the same request to create a VM from a template on another DC, or on another SD under this DC, on the same RHEV setup, the problem does not occur --> this points to a storage performance issue.
4. Earlier in the case, the storage performance was reviewed by our storage expert and by an EMC representative, and no flaws were found --> what should be the next steps here?

5. The customer decided to switch to VMware because this case has not been resolved for a long time; we have a limited time frame to fix the issue and keep the customer with RH.
Comment 24 Marina 2013-07-10 00:32:19 EDT
Created attachment 771381 [details]
the flow from Ruby to RHEV
Comment 26 Allon Mureinik 2013-07-10 10:48:55 EDT
(In reply to Marina from comment #23)
> Questions: 
> 1. Is the API flow correct and we must wait till we create the task for
> vdsm/storage to CreateSnapshot from the template (probably it is correct,
> otherwise how else we can acknowledge that the request was accepted by RHEV?)
Yes, that's correct.

> 2. Why do we need to issue GetImageInfoVDSCommand for the template before
> creating the task for CreateSnapshot?
We don't.
It was removed in commit 2575a22, part of RHEV-M 3.3 (is1)

> 3. Why does it take SPM host so long to reply to this command?
Yeela?
Comment 27 Allon Mureinik 2013-07-10 10:50:42 EDT
(In reply to Allon Mureinik from comment #26)
> (In reply to Marina from comment #23)
> > Questions: 
> > 1. Is the API flow correct and we must wait till we create the task for
> > vdsm/storage to CreateSnapshot from the template (probably it is correct,
> > otherwise how else we can acknowledge that the request was accepted by RHEV?)
> Yes, that's correct.
> 
> > 2. Why do we need to issue GetImageInfoVDSCommand for the template before
> > creating the task for CreateSnapshot?
> We don't.
> It was removed in commit 2575a22, part of RHEV-M 3.3 (is1)
This can probably easily be cherry-picked to 3.2.z.
Can't guarantee it'll solve the issue, but it will definitely increase performance.

> 
> > 3. Why does it take SPM host so long to reply to this command?
> Yeela?
Comment 28 Marina 2013-07-10 15:57:18 EDT
> > 
> > > 2. Why do we need to issue GetImageInfoVDSCommand for the template before
> > > creating the task for CreateSnapshot?
> > We don't.
> > It was removed in commit 2575a22, part of RHEV-M 3.3 (is1)
> This can probably easily be cherry-picked to 3.2.z.
> Can't guarantee it'll solve the issue, but it will definitely increase
> performance.
> 
Agreed.
And after testing it, we will be able to reassess the environment and look for other, new factors.

/me is going to work on the test package for it.
Comment 29 Allon Mureinik 2013-07-11 09:44:50 EDT
(In reply to Allon Mureinik from comment #27)
> (In reply to Allon Mureinik from comment #26)
> > (In reply to Marina from comment #23)

> > > 2. Why do we need to issue GetImageInfoVDSCommand for the template before
> > > creating the task for CreateSnapshot?
> > We don't.
> > It was removed in commit 2575a22, part of RHEV-M 3.3 (is1)
> This can probably easily be cherry-picked to 3.2.z.
> Can't guarantee it'll solve the issue, but it will definitely increase
> performance.
OK, my mistake.
This was ALREADY cherry-picked for 3.2.1 (build sf18.2)

> 
> > 
> > > 3. Why does it take SPM host so long to reply to this command?
> > Yeela?
Comment 32 Marina 2013-07-12 17:14:07 EDT
Created attachment 772887 [details]
ps aux output from spm host
Comment 33 Yeela Kaplan 2013-07-14 04:16:14 EDT
Marina,

Please attach the logs for the new package you supplied.

We need logs that cover the flow of the original issue, which is creating VMs from a template (as described in the first comment).

In particular, we need the engine, vdsm and lvm logs.

Thanks!
Comment 38 Marina 2013-07-18 09:58:06 EDT
Created attachment 775333 [details]
commit b1a3d4fa4e33c55ba7613cec27ffd6aa98dc79d0
Comment 39 Marina 2013-07-18 10:02:37 EDT
Created attachment 775334 [details]
commit e72a9ce379640e7ec3b134ef7453ee3e25f714d9
Comment 54 Jay Turner 2013-08-01 07:59:40 EDT
Setting Target Milestone to Beta1.
Comment 60 Marina 2013-08-15 20:12:57 EDT
Created attachment 787116 [details]
summary of storage commands after applying patch3 to vdsm
Comment 61 Marina 2013-08-15 20:14:39 EDT
Created attachment 787117 [details]
processed output of a file from patch3
Comment 62 Marina 2013-08-15 20:15:49 EDT
Created attachment 787118 [details]
vdsm.log after patch3
Comment 68 Yeela Kaplan 2013-08-26 04:08:08 EDT
Marina, I'm not sure this has anything to do with the new patches. 
Can you maybe attach the full vdsm log for this issue?
Thanks.
Comment 76 Marina 2013-08-28 19:35:25 EDT
Created attachment 791549 [details]
all the patches provided up to 28 Aug
Comment 79 Aharon Canan 2013-10-02 09:21:00 EDT
Verified using is17, following comment #5 on bug #1001031.
Comment 80 Charlie 2013-11-27 19:34:58 EST
This bug is currently attached to errata RHBA-2013:15291. If this change is not to be documented in the text for this errata, please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.
* Consequence: What happens when the bug presents.
* Fix: What was done to fix the bug.
* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:

https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes 

Thanks in advance.
Comment 81 errata-xmlrpc 2014-01-21 11:26:22 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0040.html
