Bug 466895 - pygrub uses OS cached data
Status: CLOSED DUPLICATE of bug 466681
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: xen
Version: 5.2
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Xen Maintenance List
QA Contact: Virtualization Bugs
Depends On: 446771
Blocks:
Reported: 2008-10-14 08:11 EDT by Kostas Georgiou
Modified: 2009-12-14 16:26 EST
CC List: 3 users

Doc Type: Bug Fix
Last Closed: 2008-10-14 08:19:17 EDT
Attachments: None

Description Kostas Georgiou 2008-10-14 08:11:59 EDT
+++ This bug was initially created as a clone of Bug #446771 +++

After I upgraded to the latest kernel and rebooted my PV F9 install, pygrub
only showed me the old grub.conf file.
A few dozen reboots later, after reading the pygrub code several times without
any luck, I tried vm.drop_caches=1, which solved the problem.
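
For reference, the workaround amounts to flushing dirty pages and then dropping
the dom0 page cache right before booting the guest, so pygrub re-reads the
device instead of serving stale cached blocks. A minimal sketch (assumes root
in dom0; the function name is just illustrative):

import os

def drop_dom0_page_cache():
    # Flush dirty pages to disk first (drop_caches only frees clean pages),
    # then ask the kernel to drop the page cache, same as vm.drop_caches=1.
    os.system('sync')
    with open('/proc/sys/vm/drop_caches', 'w') as f:
        f.write('1\n')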

I am accessing a full disk directly from Xen, which I suspect is the cause, with:
disk = [ 'phy:/dev/disk/by-id/scsi-SATA_Maxtor_6B200M0_XXXXXXX,xvda,w', ]

Using O_DIRECT in pygrub is probably the right solution, but a quick test showed
that it fails in 'fs = fsimage.open(file, get_fs_offset(file))', I guess because
fsimage doesn't align its reads on 512-byte boundaries :(
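
For what it's worth, the alignment constraint is easy to see in isolation:
O_DIRECT only works when the file offset, the transfer length and the buffer
address are all multiples of the device's logical sector size. A rough,
standalone sketch of a sector-aligned read (plain Python, not the fsimage C
code; read_direct and the 512-byte SECTOR value are assumptions):

import mmap
import os

SECTOR = 512  # assumed logical sector size; some devices require 4096

def read_direct(path, offset, length):
    # Round the requested window out to sector boundaries; mmap returns a
    # page-aligned buffer, which satisfies the O_DIRECT address alignment.
    start = offset - (offset % SECTOR)
    span = (offset + length) - start
    span += -span % SECTOR
    buf = mmap.mmap(-1, span)
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        os.lseek(fd, start, os.SEEK_SET)
        os.readv(fd, [buf])  # read directly into the aligned buffer (short reads ignored for brevity)
    finally:
        os.close(fd)
    return buf[offset - start:offset - start + length]

(fsimage is a Python binding to the libfsimage C library, so the rounding would
presumably have to happen there rather than in pygrub itself.)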

--- Additional comment from a.rogge@solvention.de on 2008-10-11 20:26:23 EDT ---

This also happens on RHEL 5.2.
I was hit by this issue while migrating some machines into VMs.

Because of the following scenario, this bug is actually security-relevant:
When you install a new kernel (which probably fixes a security issue) and reboot, the system may end up booting the old (and vulnerable) kernel.
You are especially likely to trigger this behavior when installing a new system and applying the patches immediately after the install finishes.

It gets even worse in a clustered environment when you're running on plain volumes (i.e. LVs from a clustered VG). The host nodes usually have incoherent caches for these volumes, which might lead to a VM booting kernel A when started on node1 and kernel B when started on node2.
Comment 1 Chris Lalancette 2008-10-14 08:19:17 EDT

*** This bug has been marked as a duplicate of bug 466681 ***
