Bug 629056 - RAID + LUKS + LVM - boot problem
Summary: RAID + LUKS + LVM - boot problem
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-08-31 19:27 UTC by Gerhard Wiesinger
Modified: 2010-10-08 20:24 UTC (History)
5 users (show)

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2010-10-08 20:24:30 UTC
Type: ---
Embargoed:



Description Gerhard Wiesinger 2010-08-31 19:27:15 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 1 Gerhard Wiesinger 2010-08-31 20:06:16 UTC
Sorry, the first comment was submitted too fast :-(

Description of problem:
In a RAID + LUKS + LVM configuration, Linux doesn't boot after the upgrade to FC12; it worked well in FC11.

/dev/sdc (luks-0c66fc77-a8a0-49e6-b96c-55e886a91f09) is password protected:
1.) but it should be /dev/md0
2.) the password is not accepted because the wrong partition/device is used

Problem in detail:
a.) On RAID setups, udev wrongly recognizes a crypto partition (in this case /dev/sdc), I guess because parts of the LUKS data are visible on the member device.
b.) /etc/crypttab isn't used to detect the correct root partition or to decide whether to ask for a password (e.g. no prompt when a key file is specified).


Version-Release number of selected component (if applicable):
dracut-005-2.fc12.noarch
The newest git version doesn't improve on this either.

How reproducible:
Setup looks like:
1.) RAID5(sda, sdb, sdc) => /dev/md0 => LUKS(0) => LVM => root partition
2.) RAID5(sdd, sde, sdf) => /dev/md1 => LUKS(1) => LVM => non root partition
3.) RAID1(sdg, sdh) => /boot
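For context, the root stack (item 1 above) could be built roughly as follows. This is a hypothetical reconstruction, not the commands originally used; with DRY_RUN=1 (the default here) each command is only printed, never executed:

```shell
#!/bin/sh
# Sketch of building the root stack: RAID5 -> LUKS -> LVM.
# Device names and sizes are assumptions taken from this report.
run() {
    # print instead of execute unless DRY_RUN=0
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

run mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
run cryptsetup luksFormat /dev/md0
run cryptsetup luksOpen /dev/md0 luks-md0
run pvcreate /dev/mapper/luks-md0
run vgcreate VolGroup00 /dev/mapper/luks-md0
run lvcreate -L 1.80T -n root VolGroup00
run lvcreate -L 8G -n swap VolGroup00
```

Note that LUKS sits on top of /dev/md0, which is why only the assembled array, never a member disk such as /dev/sdc, should be probed for a LUKS header.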

===============================================
dmsetup ls --tree
VolGroup01-backup (253:4)
 └─luks-md1 (253:3)
    └─ (9:1)
VolGroup00-swap (253:2)
 └─luks-md0 (253:0)
    └─ (9:0)
VolGroup00-root (253:1)
 └─luks-md0 (253:0)
    └─ (9:0)
===============================================
blkid | sort
/dev/dm-0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/mapper/luks-md0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/mapper/luks-md1: UUID="xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy" TYPE="LVM2_member"
/dev/mapper/VolGroup00-root: UUID="b728dba5-304d-4ee2-8616-20d9e921443f" TYPE="ext3"
/dev/mapper/VolGroup00-swap: TYPE="swap" UUID="04e0fa21-c532-4d73-ade4-343f74ba823c"
/dev/mapper/VolGroup01-backup: UUID="e86136a2-11b0-4855-98c1-ea5cfd38dd67" TYPE="ext3"
/dev/md0: UUID="0c66fc77-a8a0-49e6-b96c-55e886a91f09" TYPE="crypto_LUKS"
/dev/md1: UUID="aaa6ec1e-e54f-4864-a793-cc95d5fa59d2" TYPE="crypto_LUKS"
/dev/md2: LABEL="/boot" UUID="b2186262-36ff-4e25-aa6b-19f1b8c4c4d7" TYPE="ext3" SEC_TYPE="ext2"
/dev/sda1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/sdb1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/sdc1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/sdd1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sde1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sdf1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sdg1: UUID="51e8e983-8786-b868-dbc3-87357e1a9534" TYPE="linux_raid_member"
/dev/sdh1: UUID="51e8e983-8786-b868-dbc3-87357e1a9534" TYPE="linux_raid_member"
/dev/VolGroup00/root: UUID="b728dba5-304d-4ee2-8616-20d9e921443f" TYPE="ext3"
/dev/VolGroup00/swap: TYPE="swap" UUID="04e0fa21-c532-4d73-ade4-343f74ba823c"
===============================================
blkid -o udev
ID_FS_UUID=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_UUID_ENC=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_TYPE=ext3
ID_FS_UUID=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_UUID_ENC=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=aaa6ec1e-e54f-4864-a793-cc95d5fa59d2
ID_FS_UUID_ENC=aaa6ec1e-e54f-4864-a793-cc95d5fa59d2
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_TYPE=swap
ID_FS_UUID=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID_ENC=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_UUID_ENC=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_TYPE=ext3
ID_FS_TYPE=swap
ID_FS_UUID=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID_ENC=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_LABEL=/boot
ID_FS_LABEL_ENC=\x2fboot
ID_FS_UUID=b2186262-36ff-4e25-aa6b-19f1b8c4c4d7
ID_FS_UUID_ENC=b2186262-36ff-4e25-aa6b-19f1b8c4c4d7
ID_FS_TYPE=ext3
ID_FS_SEC_TYPE=ext2
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_UUID_ENC=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_UUID_ENC=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy
ID_FS_UUID_ENC=xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy
ID_FS_TYPE=LVM2_member
ID_FS_UUID=e86136a2-11b0-4855-98c1-ea5cfd38dd67
ID_FS_UUID_ENC=e86136a2-11b0-4855-98c1-ea5cfd38dd67
ID_FS_TYPE=ext3
===============================================
cat /etc/dracut.conf
# Sample dracut config file

# Specific list of dracut modules to use
#dracutmodules=""

# Dracut modules to omit
#omit_dracutmodules=""

# Dracut modules to add to the default
#add_dracutmodules=""

# additional kernel modules to the default
#add_drivers=""

# list of kernel filesystem modules to be included in the generic initramfs
#filesystems=""

# build initrd only to boot current hardware
#hostonly="yes"
#

# install local /etc/mdadm.conf
mdadmconf="yes"

# install local /etc/lvm/lvm.conf
lvmconf="yes"
===============================================
pvdisplay
  --- Physical volume ---
  PV Name               /dev/dm-3
  VG Name               VolGroup01
  PV Size               1.82 TB / not usable 21.50 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              59616
  Free PE               512
  Allocated PE          59104
  PV UUID               xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy

  --- Physical volume ---
  PV Name               /dev/dm-0
  VG Name               VolGroup00
  PV Size               1.82 TB / not usable 21.50 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              59616
  Free PE               486
  Allocated PE          59130
  PV UUID               8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
===============================================
vgdisplay
  --- Volume group ---
  VG Name               VolGroup01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               32.00 MB
  Total PE              59616
  Alloc PE / Size       59104 / 1.80 TB
  Free  PE / Size       512 / 16.00 GB
  VG UUID               YMgUf1-xV98-nXPa-HeTC-0FIY-Hhrz-Bj1Jow

  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               32.00 MB
  Total PE              59616
  Alloc PE / Size       59130 / 1.80 TB
  Free  PE / Size       486 / 15.19 GB
  VG UUID               AClbX3-8aZv-D35R-yxJa-eZ3w-YStG-pjh8eD
===============================================
lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup01/backup
  VG Name                VolGroup01
  LV UUID                IX02Po-rqUo-DJuB-wEEu-xEjh-00WS-yfdrwj
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.80 TB
  Current LE             59104
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2048
  Block device           253:4

  --- Logical volume ---
  LV Name                /dev/VolGroup00/root
  VG Name                VolGroup00
  LV UUID                VdXtdV-lDMD-ukpy-F6SB-31Yr-5Ppc-eedspa
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.80 TB
  Current LE             58880
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/VolGroup00/swap
  VG Name                VolGroup00
  LV UUID                q8oV3K-QwGM-HC5L-Plkv-glHQ-eixc-TeAlqO
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.81 GB
  Current LE             250
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
===============================================


Steps to Reproduce:
1. Implement a RAID + LUKS + LVM setup as described above
2. Try to boot

Actual results:
/dev/sdc (luks-0c66fc77-a8a0-49e6-b96c-55e886a91f09) is password protected:
With the script modified for debugging purposes to exit before the password query, the following output occurs:
cryptroot-ask.sh: Param1=/dev/sdc
cryptroot-ask.sh: Param2=luks-0c66fc77-a8a0-49e6-b96c-55e886a91f09
cryptroot-ask.sh: LUKS=
cryptroot-ask.sh: ask=1
cryptroot-ask.sh: luksname=luks-0c66fc77-a8a0-49e6-b96c-55e886a91f09
cryptroot-ask.sh: device=/dev/sdc
cryptsetup exit=0
cryptroot-ask.sh: Param1=/dev/md0
cryptroot-ask.sh: Param2=luks-0c66fc77-a8a0-49e6-b96c-55e886a91f09
cryptroot-ask.sh: LUKS=
cryptroot-ask.sh: ask=1
cryptroot-ask.sh: luksname=luks-md0
cryptroot-ask.sh: device=/dev/md0
cryptsetup exit=0
cryptroot-ask.sh: Param1=/dev/md1
cryptroot-ask.sh: Param2=luks-aaa6ec1e-e54f-4864-a793-cc95d5fa59d2
cryptroot-ask.sh: LUKS=
cryptroot-ask.sh: ask=1
cryptroot-ask.sh: luksname=luks-md1
cryptroot-ask.sh: device=/dev/md1
cryptsetup exit=0

Expected results:
/dev/md0 (luks-0c66fc77-a8a0-49e6-b96c-55e886a91f09) is password protected:
(/dev/md1 is not queried for any password since there is a key file configured)

Additional info:
Provided when necessary

Comment 2 Gerhard Wiesinger 2010-08-31 20:07:13 UTC
Similar report was closed previously:
https://bugzilla.redhat.com/show_bug.cgi?id=520109

Comment 3 Gerhard Wiesinger 2010-09-01 05:00:24 UTC
Some additional information:
===============================================
cat /etc/crypttab
luks-md0                /dev/md0        none
luks-md1                /dev/md1        /etc/crypto-key-md1
===============================================
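For readers unfamiliar with the file: the third crypttab field selects the unlock method, which is exactly what the expected results in comment 1 rely on. An annotated copy (comments added here, not part of the original file):

```
# <name>   <device>   <key file>  ("none" = ask for a passphrase at boot)
luks-md0   /dev/md0   none                  # root stack: must prompt
luks-md1   /dev/md1   /etc/crypto-key-md1   # unlocked via key file, no prompt
```
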
Working nash init from FC11:

#!/bin/nash

mount -t proc /proc /proc
setquiet
echo Mounting proc filesystem
echo Mounting sysfs filesystem
mount -t sysfs /sys /sys
echo Creating /dev
mount -o mode=0755 -t tmpfs /dev /dev
mkdir /dev/pts
mount -t devpts -o gid=5,mode=620 /dev/pts /dev/pts
mkdir /dev/shm
mkdir /dev/mapper
echo Creating initial device nodes
mknod /dev/null c 1 3
mknod /dev/zero c 1 5
mknod /dev/systty c 4 0
mknod /dev/tty c 5 0
mknod /dev/console c 5 1
mknod /dev/ptmx c 5 2
mknod /dev/fb c 29 0
mknod /dev/hvc0 c 229 0
mknod /dev/tty0 c 4 0
mknod /dev/tty1 c 4 1
mknod /dev/tty2 c 4 2
mknod /dev/tty3 c 4 3
mknod /dev/tty4 c 4 4
mknod /dev/tty5 c 4 5
mknod /dev/tty6 c 4 6
mknod /dev/tty7 c 4 7
mknod /dev/tty8 c 4 8
mknod /dev/tty9 c 4 9
mknod /dev/tty10 c 4 10
mknod /dev/tty11 c 4 11
mknod /dev/tty12 c 4 12
mknod /dev/ttyS0 c 4 64
mknod /dev/ttyS1 c 4 65
mknod /dev/ttyS2 c 4 66
mknod /dev/ttyS3 c 4 67
daemonize --ignore-missing /bin/plymouthd
echo Setting up hotplug.
hotplug
echo "Loading i2c-core module"
modprobe -q i2c-core
echo "Loading i2c-algo-bit module"
modprobe -q i2c-algo-bit
echo "Loading drm module"
modprobe -q drm
echo "Loading nouveau module"
modprobe -q nouveau
/lib/udev/console_init tty0
plymouth --show-splash
echo Creating block device nodes.
mkblkdevs
echo Creating character device nodes.
mkchardevs
echo "Loading dm-crypt module"
modprobe -q dm-crypt
echo "Loading aes module"
modprobe -q aes
echo "Loading cbc module"
modprobe -q cbc
echo "Loading sha256 module"
modprobe -q sha256
echo "Loading raid456 module"
modprobe -q raid456
echo Making device-mapper control node
mkdmnod
modprobe scsi_wait_scan
rmmod scsi_wait_scan
mkblkdevs
mdadm -As --auto=yes --run /dev/md0
setDeviceEnv LUKSUUID /dev/md0
echo Setting up disk encryption: $LUKSUUID
buildEnv LUKSUUID cryptsetup luksOpen $LUKSUUID luks-md0
plymouth ask-for-password --command $LUKSUUID
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure  VolGroup00
resume /dev/VolGroup00/swap
echo Creating root device.
mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/root
echo Mounting root filesystem.
mount /sysroot
cond -ne 0 plymouth --hide-splash
echo Setting up other filesystems.
setuproot
loadpolicy
plymouth --newroot=/sysroot
echo Switching to new root and running init.
switchroot
echo Booting has failed.
sleep -1
===============================================

So I think the same detection logic as in FC11 should be implemented again.

Comment 4 Gerhard Wiesinger 2010-09-01 05:02:25 UTC
I tried a fix that detects LUKS devices with cryptsetup isLuks, but it didn't work because isLuks misdetects member devices in a RAID5 setup (and I guess also in RAID1 and maybe other setups).

blkid -o device | sort | while read name; do cryptsetup isLuks $name;echo $name: $[$?==0]; done
/dev/dm-0: 0
/dev/mapper/luks-md0: 0
/dev/mapper/luks-md1: 0
/dev/mapper/VolGroup00-root: 0
/dev/mapper/VolGroup00-swap: 0
/dev/mapper/VolGroup01-backup: 0
/dev/md0: 1
/dev/md1: 1
/dev/md2: 0
/dev/sda1: 1
/dev/sdb1: 0
/dev/sdc1: 1
/dev/sdd1: 1
/dev/sde1: 0
/dev/sdf1: 1
/dev/sdg1: 0
/dev/sdh1: 0
/dev/VolGroup00/root: 0
/dev/VolGroup00/swap: 0
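
The false positives above (e.g. /dev/sda1 and /dev/sdf1) show that cryptsetup isLuks alone is not reliable on RAID member devices. One conceivable cross-check, sketched here as an idea rather than as the actual dracut fix, is to also require that blkid's direct probe of the device agrees:

```shell
#!/bin/sh
# Treat a device as LUKS only if both cryptsetup and a blkid probe agree.
# Sketch only; not the logic dracut actually uses.
is_really_luks() {
    dev=$1
    cryptsetup isLuks "$dev" 2>/dev/null || return 1
    [ "$(blkid -o value -s TYPE -p "$dev" 2>/dev/null)" = "crypto_LUKS" ]
}
```

With this check, /dev/sda1 would be rejected because its probed type is linux_raid_member, while /dev/md0 and /dev/md1 would still pass.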

Comment 5 Gerhard Wiesinger 2010-09-18 06:30:57 UTC
Workaround for FC12:
/sbin/new-kernel-pkg --package kernel-2.6.32.21-166.fc12.x86_64 --mkinitrd --depmod --install 2.6.32.21-166.fc12.x86_64

The workaround doesn't work anymore for FC13, since the package mkinitrd-6.0.93-1.fc12.x86_64 has been removed in FC13 and /sbin/lsinitrd and /sbin/mkinitrd have been replaced by dracut wrappers!

=> /sbin/new-kernel-pkg --package kernel-2.6.34.6-54.fc13.x86_64 --mkinitrd --depmod --install 2.6.34.6-54.fc13.x86_64
fails due to wrong init script

I even tried to rebuild the mkinitrd-6.0.93-1.fc12.x86_64 sources:
1.) Compile & install OK
2.) the resulting init script isn't correct (cryptsetup is missing)

Details:
/sbin/new-kernel-pkg -v --package kernel-2.6.34.6-54.fc13.x86_64 --mkinitrd --depmod --install 2.6.34.6-54.fc13.x86_64
initrdfile is /boot/initrd-2.6.34.6-54.fc13.x86_64.img
running depmod for 2.6.34.6-54.fc13.x86_64
creating initrd: /sbin/mkinitrd --allow-missing -f /boot/initrd-2.6.34.6-54.fc13.x86_64.img 2.6.34.6-54.fc13.x86_64
WARNING: /sys/devices/virtual/block/dm-1 is a not a block sysfs path, skipping
WARNING: /sys/devices/virtual/block/dm-2 is a not a block sysfs path, skipping
found /boot/initrd-2.6.34.6-54.fc13.x86_64.img and using it with grubby
adding 2.6.34.6-54.fc13.x86_64 to /boot/grub/grub.conf
/etc/lilo.conf does not exist, not running grubby

Please fix dracut soon to boot the correct kernel!

Comment 6 Harald Hoyer 2010-09-18 09:21:56 UTC
So I think we either have to fix blkid, or add a quirk in dracut to override blkid's information.

Comment 7 Gerhard Wiesinger 2010-09-20 05:27:17 UTC
What do you think is wrong with blkid? (At a quick look I didn't see any inconsistencies.)

Comment 8 Harald Hoyer 2010-09-20 11:35:44 UTC
> /dev/sdc (luks-0c66fc77-a8a0-49e6-b96c-55e886a91f09) is password protected:
> 1.) but should be /dev/md0
> 2.) password is not accepted as wrong partition/device is used

(In reply to comment #7)
> What do you think is wrong with blkid? (At a fast look I didn't see any
> inconsistences?)

Can you check the output of:

# /sbin/blkid -o udev -p /dev/sdc | grep ID_FS_TYPE

it should not return 

ID_FS_TYPE=crypto_LUKS

but 

ID_FS_TYPE=linux_raid_member

as far as I understand your report.

Comment 9 Gerhard Wiesinger 2010-09-20 17:16:12 UTC
Output in previous comments was with kernel 2.6.32.21-166.fc12.x86_64 and FC12

Output below is different with kernel 2.6.32.21-166.fc12.x86_64 and FC13:
===================================================
/sbin/blkid -o udev -p /dev/sdc | grep ID_FS_TYPE
ID_FS_TYPE=crypto_LUKS

Full output:
/sbin/blkid -o udev -p /dev/sdc
ID_FS_UUID=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_UUID_ENC=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_VERSION=256
ID_FS_TYPE=crypto_LUKS
ID_FS_USAGE=crypto
ID_PART_TABLE_TYPE=dos
===================================================
blkid | sort
/dev/block/253:0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/block/253:1: UUID="b728dba5-304d-4ee2-8616-20d9e921443f" TYPE="ext3"
/dev/block/8:1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/block/9:0: UUID="0c66fc77-a8a0-49e6-b96c-55e886a91f09" TYPE="crypto_LUKS"
/dev/dm-0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/mapper/luks-md0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/mapper/luks-md1: UUID="xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy" TYPE="LVM2_member"
/dev/mapper/VolGroup00-root: UUID="b728dba5-304d-4ee2-8616-20d9e921443f" TYPE="ext3"
/dev/mapper/VolGroup00-swap: TYPE="swap" UUID="04e0fa21-c532-4d73-ade4-343f74ba823c"
/dev/mapper/VolGroup01-backup: UUID="e86136a2-11b0-4855-98c1-ea5cfd38dd67" TYPE="ext3"
/dev/md0: UUID="0c66fc77-a8a0-49e6-b96c-55e886a91f09" TYPE="crypto_LUKS"
/dev/md1: UUID="aaa6ec1e-e54f-4864-a793-cc95d5fa59d2" TYPE="crypto_LUKS"
/dev/md2: LABEL="/boot" UUID="b2186262-36ff-4e25-aa6b-19f1b8c4c4d7" TYPE="ext3" SEC_TYPE="ext2"
/dev/sdb1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/sdc1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/sdd1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sde1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sdf1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sdg1: UUID="51e8e983-8786-b868-dbc3-87357e1a9534" TYPE="linux_raid_member"
/dev/sdh1: UUID="51e8e983-8786-b868-dbc3-87357e1a9534" TYPE="linux_raid_member"
/dev/VolGroup00/root: UUID="b728dba5-304d-4ee2-8616-20d9e921443f" TYPE="ext3"
/dev/VolGroup00/swap: TYPE="swap" UUID="04e0fa21-c532-4d73-ade4-343f74ba823c"
===================================================
blkid -o udev
ID_FS_UUID=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_UUID_ENC=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_TYPE=ext3
ID_FS_UUID=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_UUID_ENC=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=aaa6ec1e-e54f-4864-a793-cc95d5fa59d2
ID_FS_UUID_ENC=aaa6ec1e-e54f-4864-a793-cc95d5fa59d2
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_TYPE=swap
ID_FS_UUID=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID_ENC=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_UUID_ENC=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_TYPE=ext3
ID_FS_TYPE=swap
ID_FS_UUID=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID_ENC=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_LABEL=/boot
ID_FS_LABEL_ENC=\x2fboot
ID_FS_UUID=b2186262-36ff-4e25-aa6b-19f1b8c4c4d7
ID_FS_UUID_ENC=b2186262-36ff-4e25-aa6b-19f1b8c4c4d7
ID_FS_TYPE=ext3
ID_FS_SEC_TYPE=ext2
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_UUID_ENC=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_UUID_ENC=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy
ID_FS_UUID_ENC=xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy
ID_FS_TYPE=LVM2_member
ID_FS_UUID=e86136a2-11b0-4855-98c1-ea5cfd38dd67
ID_FS_UUID_ENC=e86136a2-11b0-4855-98c1-ea5cfd38dd67
ID_FS_TYPE=ext3
ID_FS_UUID=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_UUID_ENC=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_TYPE=ext3
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_UUID=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_UUID_ENC=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
===================================================

The only thing I can imagine: once, after a failure, I rebuilt the RAID onto /dev/sdc (the whole disk) instead of /dev/sdc1, so maybe a stale LUKS header is still there. I will remove /dev/sdc1 from the RAID set, overwrite the first GB with zeros, and then re-add it.
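That plan can be sketched as the following command sequence (hypothetical; DRY_RUN=1, the default here, only prints each command — running it for real destroys data, so only do that on the correct disk):

```shell
#!/bin/sh
# Remove the member carrying the stale LUKS header, zero the start of the
# whole disk, then re-add the partition. Device names assume this report.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run mdadm /dev/md0 --fail /dev/sdc1
run mdadm /dev/md0 --remove /dev/sdc1
run dd if=/dev/zero of=/dev/sdc bs=1M count=1024   # first GB of the disk
# (recreate the partition table on /dev/sdc here, e.g. with fdisk/sfdisk)
run mdadm /dev/md0 --add /dev/sdc1
```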

Comment 10 Gerhard Wiesinger 2010-09-20 17:29:57 UTC
OK, after removing and re-adding, the output changed to the expected result:
/sbin/blkid -o udev -p /dev/sdc
ID_PART_TABLE_TYPE=dos

RAID rebuild is still in progress. I will try to boot when completed (tomorrow).

Comment 11 Gerhard Wiesinger 2010-09-21 05:43:41 UTC
RAID rebuild completed. The kernel boots well :-)
Details:
rpm -e mkinitrd-6.0.93-2.fc13.x86_64
rpm -Va dracut
# OK
/sbin/new-kernel-pkg --package kernel-2.6.34.6-54.fc13.x86_64 --mkinitrd --dracut --depmod --install 2.6.34.6-54.fc13.x86_64

more details:
===================================================
Output still:
/sbin/blkid -o udev -p /dev/sdc
ID_PART_TABLE_TYPE=dos
===================================================
blkid | sort
/dev/block/253:0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/block/8:1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/block/9:0: UUID="0c66fc77-a8a0-49e6-b96c-55e886a91f09" TYPE="crypto_LUKS"
/dev/dm-0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/mapper/luks-md0: UUID="8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW" TYPE="LVM2_member"
/dev/mapper/luks-md1: UUID="xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy" TYPE="LVM2_member"
/dev/mapper/VolGroup00-root: UUID="b728dba5-304d-4ee2-8616-20d9e921443f" TYPE="ext3"
/dev/mapper/VolGroup00-swap: TYPE="swap" UUID="04e0fa21-c532-4d73-ade4-343f74ba823c"
/dev/mapper/VolGroup01-backup: UUID="e86136a2-11b0-4855-98c1-ea5cfd38dd67" TYPE="ext3"
/dev/md0: UUID="0c66fc77-a8a0-49e6-b96c-55e886a91f09" TYPE="crypto_LUKS"
/dev/md1: UUID="aaa6ec1e-e54f-4864-a793-cc95d5fa59d2" TYPE="crypto_LUKS"
/dev/md2: LABEL="/boot" UUID="b2186262-36ff-4e25-aa6b-19f1b8c4c4d7" TYPE="ext3" SEC_TYPE="ext2"
/dev/sdb1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/sdc1: UUID="56e9bde4-5cd1-630d-188c-22a54c1c8c37" TYPE="linux_raid_member"
/dev/sdd1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sde1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sdf1: UUID="eb295015-e20b-e133-15db-5c9130487300" TYPE="linux_raid_member"
/dev/sdg1: UUID="51e8e983-8786-b868-dbc3-87357e1a9534" TYPE="linux_raid_member"
/dev/sdh1: UUID="51e8e983-8786-b868-dbc3-87357e1a9534" TYPE="linux_raid_member"
/dev/VolGroup00/root: UUID="b728dba5-304d-4ee2-8616-20d9e921443f" TYPE="ext3"
/dev/VolGroup00/swap: TYPE="swap" UUID="04e0fa21-c532-4d73-ade4-343f74ba823c"
===================================================
blkid -o udev
ID_FS_UUID=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_UUID_ENC=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_TYPE=ext3
ID_FS_UUID=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_UUID_ENC=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=aaa6ec1e-e54f-4864-a793-cc95d5fa59d2
ID_FS_UUID_ENC=aaa6ec1e-e54f-4864-a793-cc95d5fa59d2
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_TYPE=swap
ID_FS_UUID=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID_ENC=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_UUID_ENC=b728dba5-304d-4ee2-8616-20d9e921443f
ID_FS_TYPE=ext3
ID_FS_TYPE=swap
ID_FS_UUID=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID_ENC=04e0fa21-c532-4d73-ade4-343f74ba823c
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_LABEL=/boot
ID_FS_LABEL_ENC=\x2fboot
ID_FS_UUID=b2186262-36ff-4e25-aa6b-19f1b8c4c4d7
ID_FS_UUID_ENC=b2186262-36ff-4e25-aa6b-19f1b8c4c4d7
ID_FS_TYPE=ext3
ID_FS_SEC_TYPE=ext2
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_UUID_ENC=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=eb295015-e20b-e133-15db-5c9130487300
ID_FS_UUID_ENC=eb295015-e20b-e133-15db-5c9130487300
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_UUID_ENC=51e8e983-8786-b868-dbc3-87357e1a9534
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy
ID_FS_UUID_ENC=xgsva5-yNEJ-VRqc-io0m-HY5G-GggP-zHgWfy
ID_FS_TYPE=LVM2_member
ID_FS_UUID=e86136a2-11b0-4855-98c1-ea5cfd38dd67
ID_FS_UUID_ENC=e86136a2-11b0-4855-98c1-ea5cfd38dd67
ID_FS_TYPE=ext3
ID_FS_UUID=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_UUID_ENC=8kLPOe-HWH2-cFI8-fgSy-kVkY-dIeY-CAsIsW
ID_FS_TYPE=LVM2_member
ID_FS_UUID=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_UUID_ENC=0c66fc77-a8a0-49e6-b96c-55e886a91f09
ID_FS_TYPE=crypto_LUKS
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
ID_FS_UUID=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_UUID_ENC=56e9bde4-5cd1-630d-188c-22a54c1c8c37
ID_FS_TYPE=linux_raid_member
===================================================

Comment 12 Harald Hoyer 2010-09-21 08:34:04 UTC
So, this bug could be assigned to the installer, component "anaconda", which should clear old RAID/crypto signatures when it repartitions your system.

Comment 13 Gerhard Wiesinger 2010-09-24 04:58:53 UTC
That means clearing ~100 MB at the beginning and at least ~64 KB (or a little more) at the end (RAID 0.90 superblock).
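A sketch of that clearing, written so it also works on a regular file and can therefore be tried safely (the 100 MB / 64 KB figures follow the comment above; on a real device this is destructive):

```shell
#!/bin/sh
# Zero ~100 MB at the start and the last 64 KB of a device (or file),
# enough to cover LUKS headers and a RAID 0.90 superblock at the end.
clear_signatures() {
    dev=$1
    # size in bytes: blockdev for block devices, stat for regular files
    size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c %s "$dev")
    dd if=/dev/zero of="$dev" bs=1M count=100 conv=notrunc 2>/dev/null
    dd if=/dev/zero of="$dev" bs=512 count=128 \
       seek=$(( size / 512 - 128 )) conv=notrunc 2>/dev/null
}
```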

Comment 14 David Lehman 2010-10-08 20:24:30 UTC
Anaconda already does this when you create or destroy devices or their formatting. We will not go looking for data to erase if we are not asked to modify a device. This is the user's responsibility.

If I misunderstand the issue please reopen it with a description of what you think is wrong.

