Bug 251326 - Kernel panic when rebooting with existing lvm snapshots
Summary: Kernel panic when rebooting with existing lvm snapshots
Status: CLOSED DUPLICATE of bug 244215
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.0
Hardware: i386 Linux
Target Milestone: ---
Assignee: Milan Broz
QA Contact: Martin Jenner
Depends On:
Reported: 2007-08-08 12:02 UTC by Sven Hoexter
Modified: 2013-03-01 04:05 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2007-09-13 16:46:20 UTC


Description Sven Hoexter 2007-08-08 12:02:49 UTC
Description of problem:
I receive the following output when rebooting while an LVM snapshot still
exists and is mounted:

Red Hat nash ...
Reading all physical volumes. This may take a while ...
Found volume group "vg00" using metadata type lvm2
device-mapper: table: 253:3: snapshot: Failed to read snapshot metadata
device-mapper: reload ioctl failed: Invalid argument
2 logical volume(s) in volume group "vg00" now active
nash received SIGSEGV! Backtrace:
Kernel panic - not syncing: Attempted to kill init!

Note: I found this problem on an HP G5 server running RHEL 5 with a Xen setup.
The reason for running into the problem seems to be that I made a mistake in my
backup script, and it went unnoticed that the LVM snapshots still existed and
were mounted. Rebooting the dom0 for a kernel upgrade exposed the problem, and
rebooting with the former kernel produced the same panic.
I then rebooted with a Live CD, where lvdisplay showed the existing snapshot.
Removing the snapshot solved the problem.
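The leftover snapshot could have been caught before the reboot. As an illustrative sketch (the sample LV names and the pre-reboot check itself are assumptions, not part of the reporter's setup), a backup script can scan `lvs -o lv_attr,lv_name` output for LVs whose attribute string starts with `s` (active snapshot) or `S` (invalid/overflowed snapshot); the sketch runs against canned sample output instead of a live system:

```shell
#!/bin/sh
# Hypothetical pre-reboot guard for a backup script: refuse to proceed if
# any snapshot LVs are still present. The first character of the lv_attr
# column is 's' for a snapshot and 'S' for an invalid (overflowed) one.
# Canned sample standing in for: lvs --noheadings -o lv_attr,lv_name,vg_name
sample_lvs_output() {
    cat <<'EOF'
  swi-a-s--- snap0  vg00
  owi-aos--- dom0   vg00
  -wi-ao---- swap   vg00
EOF
}

# Collect names of LVs whose attr string marks them as snapshots.
leftover=$(sample_lvs_output | awk '$1 ~ /^[sS]/ {print $2}')
if [ -n "$leftover" ]; then
    echo "leftover snapshots: $leftover"
fi
```

On a real system the `sample_lvs_output` call would be replaced by the actual `lvs` invocation, and the script would abort the reboot path when the check fires.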

Lacking a spare RHEL 5 license and a similar system, I reproduced this
problem on a laptop running CentOS 5.

Some further examination revealed that it is impossible to activate the LV
from the Live CD while the snapshot still exists:
lvchange -ay /dev/vg00/dom0
device-mapper: table: 254:1: snapshot-origin: unknown target type
device-mapper: ioctl: error adding target to table
device-mapper: reload ioctl failed: Invalid argument

I'm not sure whether the problem originates in the LVM userspace tools or in
the kernel. It might even be related to the snapshot becoming invalid
(outdated) after the reboot.

How reproducible:
Reliably, as long as an LVM snapshot exists and is still mounted.

Steps to Reproduce:
1. Install a system with / on lvm.
2. Boot, create a snapshot of the LV holding /.
3. Mount the LV snapshot.
4. Reboot.
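The steps above can be sketched as the following commands, assuming a volume group `vg00` with the root filesystem on an LV named `root` (both names are illustrative, not from the report). This requires root and, per this report, will panic the machine on reboot, so it belongs on a throwaway system only:

```shell
# 1. System already installed with / on LVM (vg00/root assumed).
# 2. Create a snapshot of the LV holding /.
lvcreate -s -L 512M -n rootsnap /dev/vg00/root
# 3. Mount the snapshot and leave it mounted.
mkdir -p /mnt/rootsnap
mount /dev/vg00/rootsnap /mnt/rootsnap
# 4. Reboot -- nash then fails to activate the snapshot and panics.
reboot
```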

Comment 1 Sven Hoexter 2007-08-08 12:47:37 UTC
OK, I experimented a little further, and it seems that reactivation of the LV
fails when the LV snapshot is full.

So to reproduce it you have to do the following:
1. Create an LV snapshot of / with, say, 1 MB of COW space.
2. Copy /usr/bin to your home directory or something similar.
3. Try to reboot.
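The refined reproduction from this comment can be sketched as below, again assuming the hypothetical `vg00`/`root` names. The COW area is deliberately undersized so ordinary writes to / overflow it before the reboot; this is destructive and requires root on a scratch system:

```shell
# 1. Snapshot of / with a tiny (1 MB) COW area.
lvcreate -s -L 1M -n rootsnap /dev/vg00/root
# 2. Generate enough writes to / to overfill the snapshot's COW space.
cp -a /usr/bin ~/bincopy
# lvs should now show the snapshot as full/invalid ('S' in lv_attr).
lvs
# 3. Reboot -- activation of the overfilled snapshot fails and nash panics.
reboot
```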

Comment 2 Milan Broz 2007-09-13 16:46:20 UTC
So you effectively overfill the snapshot, and during reboot the system cannot
activate the overfilled snapshot.

Already fixed in current testing kernel.

*** This bug has been marked as a duplicate of 244215 ***
