Bug 1262086 - Cannot access encrypted drive in custom partitioning
Status: CLOSED INSUFFICIENT_DATA
Product: Fedora
Classification: Fedora
Component: kernel
Version: 23
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
Reported: 2015-09-10 15:03 EDT by Stephen Gallagher
Modified: 2015-09-11 08:48 EDT
CC: 10 users

Doc Type: Bug Fix
Last Closed: 2015-09-11 08:48:42 EDT
Type: Bug


Attachments
journal of the error (174.37 KB, text/x-vhdl), 2015-09-10 15:03 EDT, Stephen Gallagher

Description Stephen Gallagher 2015-09-10 15:03:12 EDT
Created attachment 1072337: journal of the error

Description of problem:
Custom partitioning hangs when trying to access an encrypted LVM partition.

Version-Release number of selected component (if applicable):
anaconda-23.19.2-1.fc23

How reproducible:
Every time

Steps to Reproduce:
1) Install a fresh VM with Fedora Server 23 Beta TC4, taking all defaults (including the default filesystem layout) except checking the encryption box on the storage pane and entering a passphrase.

2) Verify that the installed system boots and can be logged in.

3) Shut it down and boot from the TC4 install media again. Go into the storage pane and select "I will configure storage myself".

4) Attempt to unlock the encrypted LVM partition. The installer hangs. I was able to get to a virtual terminal and extract the journal, attached here, which shows a kernel BUG hit while accessing the filesystem. (A minimal sketch of the unlock operation is included under Additional info below.)
Actual results:
The installer hangs while unlocking the encrypted LVM partition; the attached journal shows a kernel BUG in virtio_blk.

Expected results:
Custom partitioning should work properly.

Additional info:

See attached logs.
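
For anyone wanting to poke at this outside anaconda, here is a minimal sketch in C of the unlock operation against libcryptsetup. It is an approximation: anaconda/blivet actually drives libcryptsetup indirectly through libblockdev, and the device path (/dev/vda2), mapping name, and passphrase below are assumptions for a default Fedora Server layout in a virtio VM, not values taken from this report.

/* Build: gcc unlock-sketch.c -lcryptsetup -o unlock-sketch */
#include <stdio.h>
#include <string.h>
#include <libcryptsetup.h>

int main(void)
{
    struct crypt_device *cd = NULL;
    const char *pass = "test-passphrase"; /* assumption: the passphrase set in step 1 */
    int rc;

    /* assumption: /dev/vda2 is the LUKS-encrypted LVM PV in the default layout */
    rc = crypt_init(&cd, "/dev/vda2");
    if (rc < 0) { fprintf(stderr, "crypt_init failed: %d\n", rc); return 1; }

    /* read and parse the LUKS header */
    rc = crypt_load(cd, CRYPT_LUKS1, NULL);
    if (rc < 0) { fprintf(stderr, "crypt_load failed: %d\n", rc); crypt_free(cd); return 1; }

    /* set up the dm-crypt mapping; the LVM scan that follows the unlock is
       what generates the dm-crypt I/O during which the hang was observed */
    rc = crypt_activate_by_passphrase(cd, "luks-sketch", CRYPT_ANY_SLOT,
                                      pass, strlen(pass), 0);
    fprintf(stderr, "activate returned %d\n", rc);

    crypt_free(cd);
    return rc < 0;
}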
Comment 1 David Lehman 2015-09-10 15:16:41 EDT
Sep 10 19:12:05 localhost kernel: ------------[ cut here ]------------
Sep 10 19:12:05 localhost kernel: kernel BUG at drivers/block/virtio_blk.c:172!
Sep 10 19:12:05 localhost kernel: invalid opcode: 0000 [#1] SMP 
Sep 10 19:12:05 localhost kernel: Modules linked in: vfat fat btrfs xfs libcrc32c uinput fcoe libfcoe libfc scsi_transport_fc zram iosf_mbi joydev virtio_balloon i2c_piix4 parport_pc parport acpi_cpufreq loop nls_utf8 isofs 8021q garp stp llc mrp virtio_console virtio_net virtio_blk virtio_rng qxl crct10dif_pclmul crc32_pclmul crc32c_intel drm_kms_helper ttm ghash_clmulni_intel drm serio_raw virtio_pci virtio_ring virtio ata_generic pata_acpi scsi_dh_rdac scsi_dh_emc scsi_dh_alua sunrpc sha256_ssse3 dm_crypt dm_round_robin linear raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq raid1 raid0 iscsi_ibft iscsi_boot_sysfs floppy iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi squashfs cramfs edd dm_multipath
Sep 10 19:12:05 localhost kernel: CPU: 0 PID: 1856 Comm: dmcrypt_write Not tainted 4.2.0-1.fc23.x86_64 #1
Sep 10 19:12:05 localhost kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.2-20150714_191134- 04/01/2014
Sep 10 19:12:05 localhost kernel: task: ffff88004fd58000 ti: ffff88004ffb0000 task.ti: ffff88004ffb0000
Sep 10 19:12:05 localhost kernel: RIP: 0010:[<ffffffffa019da5f>]  [<ffffffffa019da5f>] virtio_queue_rq+0x1ef/0x280 [virtio_blk]
Sep 10 19:12:05 localhost kernel: RSP: 0018:ffff88004ffb3b78  EFLAGS: 00010202
Sep 10 19:12:05 localhost kernel: RAX: 00000000000000ac RBX: ffff88007fb34c00 RCX: dead000000200200
Sep 10 19:12:05 localhost kernel: RDX: ffff88004ffb3bf8 RSI: ffff88004ffb3c18 RDI: ffff88007fb34c00
Sep 10 19:12:05 localhost kernel: RBP: ffff88004ffb3bc8 R08: ffff88007e2f6a80 R09: 0000000000000000
Sep 10 19:12:05 localhost kernel: R10: 0000000000001000 R11: 0000000000000000 R12: 0000000000000000
Sep 10 19:12:05 localhost kernel: R13: ffff88007e2f6a80 R14: ffff88007e2f6a80 R15: ffff88007f809840
Sep 10 19:12:05 localhost kernel: FS:  0000000000000000(0000) GS:ffff88007d000000(0000) knlGS:0000000000000000
Sep 10 19:12:05 localhost kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Sep 10 19:12:05 localhost kernel: CR2: 00005583a28cc5e8 CR3: 0000000043758000 CR4: 00000000001406f0
Sep 10 19:12:05 localhost kernel: Stack:
Sep 10 19:12:05 localhost kernel:  ffff8800794ecd58 ffffffffffffff10 ffffffff81375b98 0000000000000010
Sep 10 19:12:05 localhost kernel:  0000000000000213 ffff88007fb34c00 ffff88004ffb3bf8 0000000000000000
Sep 10 19:12:05 localhost kernel:  ffff88007e2f6a80 0000000000000000 ffff88004ffb3c68 ffffffff81379c30
Sep 10 19:12:05 localhost kernel: Call Trace:
Sep 10 19:12:05 localhost kernel:  [<ffffffff81375b98>] ? __blk_recalc_rq_segments+0xd8/0x390
Sep 10 19:12:05 localhost kernel:  [<ffffffff81379c30>] __blk_mq_run_hw_queue+0x1d0/0x370
Sep 10 19:12:05 localhost kernel:  [<ffffffff81379a41>] blk_mq_run_hw_queue+0x91/0xb0
Sep 10 19:12:05 localhost kernel:  [<ffffffff8137aecc>] blk_mq_insert_requests+0xbc/0x110
Sep 10 19:12:05 localhost kernel:  [<ffffffff8137b9e2>] blk_mq_flush_plug_list+0x132/0x160
Sep 10 19:12:05 localhost kernel:  [<ffffffff81371666>] blk_flush_plug_list+0xb6/0x220
Sep 10 19:12:05 localhost kernel:  [<ffffffff81371b34>] blk_finish_plug+0x34/0x50
Sep 10 19:12:05 localhost kernel:  [<ffffffffa0117e16>] dmcrypt_write+0x1d6/0x1f0 [dm_crypt]
Sep 10 19:12:05 localhost kernel:  [<ffffffff810c79d0>] ? wake_up_q+0x70/0x70
Sep 10 19:12:05 localhost kernel:  [<ffffffffa0117c40>] ? crypt_iv_lmk_dtr+0x60/0x60 [dm_crypt]
Sep 10 19:12:05 localhost kernel:  [<ffffffff810bc868>] kthread+0xd8/0xf0
Sep 10 19:12:05 localhost kernel:  [<ffffffff810bc790>] ? kthread_worker_fn+0x160/0x160
Sep 10 19:12:05 localhost kernel:  [<ffffffff8177809f>] ret_from_fork+0x3f/0x70
Sep 10 19:12:05 localhost kernel:  [<ffffffff810bc790>] ? kthread_worker_fn+0x160/0x160
Sep 10 19:12:05 localhost kernel: Code: ff 41 0f b7 85 f4 00 00 00 41 c7 85 78 01 00 00 08 00 00 00 49 c7 85 80 01 00 00 00 00 00 00 41 89 85 7c 01 00 00 e9 ab fe ff ff <0f> 0b 49 8b 87 b0 00 00 00 41 83 e6 ef 4a 8b 3c 20 e8 5b fa fe 
Sep 10 19:12:05 localhost kernel: RIP  [<ffffffffa019da5f>] virtio_queue_rq+0x1ef/0x280 [virtio_blk]
Sep 10 19:12:05 localhost kernel:  RSP <ffff88004ffb3b78>
Sep 10 19:12:05 localhost kernel: ---[ end trace 0de4e10e16491cc8 ]---
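
For reference, in the upstream 4.2 tree drivers/block/virtio_blk.c:172 appears to correspond to the segment-count assertion at the top of virtio_queue_rq(); the "invalid opcode" is the ud2 instruction that BUG() plants (the <0f> 0b visible in the Code: dump above). Abridged sketch from the upstream source; matching this exact line number to the Fedora 4.2.0-1.fc23 build is an assumption:

static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
                           const struct blk_mq_queue_data *bd)
{
    struct virtio_blk *vblk = hctx->queue->queuedata;
    struct request *req = bd->rq;

    /* A request must fit in the virtqueue scatterlist: its data segments
       plus one header and one status element. dm-crypt submitting a request
       with more physical segments than the device advertised would trip
       this check and produce exactly the oops above. */
    BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);

    /* ... map the request into the scatterlist and kick the queue ... */
}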
Comment 2 Stephen Gallagher 2015-09-10 16:07:35 EDT
I just tried this again, this time starting from the final Fedora 22 Server DVD. I followed exactly the same steps as above (verifying that I could access the encrypted LVM partition from the F22 installer as well) before booting the F23 installer.

This time, with the setup having been created by the F22 installer, I did not hit the kernel bug. This probably means it is not a clear F23 blocker.

There may be two bugs here: one in the kernel that prevents reading the encrypted LVM under certain circumstances, and/or one in anaconda/blivet that causes it to produce an invalid configuration.

I will test a few more configurations.
Comment 3 Stephen Gallagher 2015-09-10 16:39:38 EDT
OK, so I tried reproducing the original steps (with F23 Beta TC4 creating the partition and then trying to read it again) and have been unable to replicate it.

The reproduction worked twice in a row, but now apparently never again...
Comment 4 Justin M. Forbes 2015-09-11 08:00:10 EDT
So it cannot be reproduced anymore? I will leave this open for a bit longer just in case, but I don't see much that can be done if it cannot be reproduced. Is it possible that something went wonky in the host?
Comment 5 Stephen Gallagher 2015-09-11 08:48:42 EDT
Nah, I'll just close it now and if I encounter it again I'll reopen this bug.
