Bug 466895 - pygrub uses OS cached data
Summary: pygrub uses OS cached data
Keywords:
Status: CLOSED DUPLICATE of bug 466681
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: xen
Version: 5.2
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Xen Maintainance List
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 446771
Blocks:
 
Reported: 2008-10-14 12:11 UTC by Kostas Georgiou
Modified: 2009-12-14 21:26 UTC (History)
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-10-14 12:19:17 UTC
Target Upstream Version:
Embargoed:



Description Kostas Georgiou 2008-10-14 12:11:59 UTC
+++ This bug was initially created as a clone of Bug #446771 +++

After I upgraded to the latest kernel and rebooted my PV F9 install, pygrub only
showed me the old grub.conf file.
A few dozen reboots later, after reading the pygrub code several times without any
luck, I tried vm.drop_caches=1, which solved the problem.
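
For reference, a minimal Python 3 sketch of that workaround (not part of pygrub;
the helper name drop_page_cache is made up here), assuming it runs as root in
dom0 before pygrub opens the guest disk:

import subprocess

def drop_page_cache():
    # Write dirty pages back first so they become clean and droppable.
    subprocess.check_call(["sync"])
    # Equivalent to "sysctl vm.drop_caches=1".
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("1\n")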

I am accessing a full disk directly from xen, which I suspect is the cause, with:
disk = [ 'phy:/dev/disk/by-id/scsi-SATA_Maxtor_6B200M0_XXXXXXX,xvda,w', ]

Using O_DIRECT in pygrub is probably the right solution, but a quick test showed
that it fails in 'fs = fsimage.open(file, get_fs_offset(file))', I guess because
fsimage doesn't align its reads on 512-byte boundaries :(
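
For illustration, a rough Python 3 sketch of the alignment O_DIRECT demands (this
is not the fsimage code; read_direct and SECTOR are hypothetical): the file
offset, the request length and the userspace buffer all have to be sector-aligned,
so an arbitrary read has to be widened to 512-byte boundaries and sliced afterwards.

import mmap
import os

SECTOR = 512  # logical sector size assumed here

def read_direct(path, offset, length):
    start = (offset // SECTOR) * SECTOR                        # round down
    end = ((offset + length + SECTOR - 1) // SECTOR) * SECTOR  # round up
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = mmap.mmap(-1, end - start)  # page-aligned, satisfies O_DIRECT
        os.lseek(fd, start, os.SEEK_SET)
        os.readv(fd, [buf])               # read into the aligned buffer
        return buf[offset - start:offset - start + length]
    finally:
        os.close(fd)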

--- Additional comment from a.rogge on 2008-10-11 20:26:23 EDT ---

This also happens on RHEL 5.2.
I was hit by this issue while migrating some machines into VMs.

Because of the following scenario, this bug is actually security-relevant:
When you install a new kernel (which probably fixes a security issue) and reboot the system, it may end up booting the old (and vulnerable) kernel.
You are especially likely to trigger this behavior when installing a new system and applying the patches immediately after the install is finished.

It gets even worse in a clustered environment when you're running on plain volumes (i.e. LVs from a clustered VG). Usually the host nodes have incoherent caches for these volumes, which might lead to a VM booting kernel A when started on node1 and kernel B when started on node2.

Comment 1 Chris Lalancette 2008-10-14 12:19:17 UTC

*** This bug has been marked as a duplicate of bug 466681 ***

