Description of problem:
Mounting an encrypted Intel BIOS RAID array hangs after the following update: mdadm-3.2.6-1.fc18.x86_64 to mdadm-3.2.6-12.fc18.x86_64

Version-Release number of selected component (if applicable):
see above

How reproducible:
100%

Steps to Reproduce:
1. Install fc18 on a server with an existing array (that was running fc17).
2. Update the mdadm package.
3. Try to mount the filesystem.

Actual results:
The mount command hangs; no errors in any logs.

Expected results:
The filesystem should mount.

Additional info:
I can open the LUKS device OK (see the reproduction sketch after the dumpe2fs output below).

Other info that may be useful:

mdadm -D /dev/md127
/dev/md127:
        Version : imsm
     Raid Level : container
  Total Devices : 4
Working Devices : 4

           UUID : 3936b762:57f0fc35:dcb179b9:f5bb1675
  Member Arrays : /dev/md/Volume0_0

    Number   Major   Minor   RaidDevice
       0       8       32        -        /dev/sdc
       1       8       48        -        /dev/sdd
       2       8       64        -        /dev/sde
       3       8       16        -        /dev/sdb

mdadm -D /dev/md126
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid5
     Array Size : 2930280448 (2794.53 GiB 3000.61 GB)
  Used Dev Size : 976760320 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4

          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-asymmetric
     Chunk Size : 64K

           UUID : 67651b3e:5d9144be:c79d629e:f907af03

    Number   Major   Minor   RaidDevice State
       3       8       16        0      active sync   /dev/sdb
       2       8       32        1      active sync   /dev/sdc
       1       8       48        2      active sync   /dev/sdd
       0       8       64        3      active sync   /dev/sde

/etc/crypttab:
luks-7425e393-21ac-4f25-be4a-458eb968aaaf UUID=7425e393-21ac-4f25-be4a-458eb968aaaf none

/etc/fstab:
/dev/mapper/luks-7425e393-21ac-4f25-be4a-458eb968aaaf /data ext4 defaults,x-systemd.device-timeout=0 1 2

dumpe2fs -h /dev/dm-4
dumpe2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:   <none>
Last mounted on:          /data
Filesystem UUID:          6c305a06-927c-438c-ad4d-9ba5e84acaec
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              183148544
Block count:              732569563
Reserved block count:     36628478
Free blocks:              404976428
Free inodes:              182924049
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      849
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              16
RAID stripe width:        48
Flex block group size:    16
Filesystem created:       Sat Jan 28 05:03:11 2012
Last mount time:          Sun Feb  3 12:50:06 2013
Last write time:          Sun Feb  3 12:50:06 2013
Mount count:              15
Maximum mount count:      23
Last checked:             Mon Jan 28 20:14:15 2013
Check interval:           15552000 (6 months)
Next check after:         Sat Jul 27 21:14:15 2013
Lifetime writes:          5224 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      13f95ee5-1ba6-4574-8351-c305c5e32a53
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x007ae9d8
Journal start:            1
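For completeness, here is roughly how the hang is reproduced and observed by hand. This is only a sketch: the mapper name is taken from the crypttab/fstab above, and it is assumed that /dev/md126 (the IMSM member array) is the device backing the LUKS volume.

# Assemble the IMSM container and member array (normally done automatically at boot)
mdadm --assemble --scan

# Open the LUKS volume on top of the RAID member array; this step succeeds
cryptsetup luksOpen /dev/md126 luks-7425e393-21ac-4f25-be4a-458eb968aaaf

# Mounting the decrypted device is what hangs
mount /dev/mapper/luks-7425e393-21ac-4f25-be4a-458eb968aaaf /data

# While the mount is stuck, these show the array state and any blocked jobs
cat /proc/mdstat
systemctl list-jobs
dmesg | tail -n 50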
Please note that updating systemd causes the same problem; see bug https://bugzilla.redhat.com/show_bug.cgi?id=907151. Also, as per this thread, at least one other user is having the same problem: http://forums.fedoraforum.org/showthread.php?t=288148
*** This bug has been marked as a duplicate of bug 907151 ***