Bug 1410585 - blkdeactivate does not umount software raid prior to deactivating it
Summary: blkdeactivate does not umount software raid prior to deactivating it
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: device-mapper
Version: 2.02.166
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Peter Rajnoha
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1410743
 
Reported: 2017-01-05 20:00 UTC by Rick Warner
Modified: 2019-08-06 00:53 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1410743
Environment:
Last Closed: 2019-08-06 00:53:58 UTC
Embargoed:
rule-engine: lvm-technical-solution?


Attachments
patches blkdeactivate to try umounting raid devices too (644 bytes, patch)
2017-01-05 20:00 UTC, Rick Warner

Description Rick Warner 2017-01-05 20:00:57 UTC
Created attachment 1237793 [details]
patches blkdeactivate to try umounting raid devices too

Description of problem: /usr/sbin/blkdeactivate is called during shutdown/reboot to unmount and deactivate any LVM or dmraid block devices. With the update in RHEL/CentOS 7.3, it now also deactivates software RAID (MD) devices. However, the unmount function was not updated to unmount software RAID devices prior to deactivating them.

This prevents the system from shutting down or rebooting when it is set up with software RAID 1 for /boot and the rest of the system on ZFS (using the ZFS on Linux repo).

I've included a patch to fix it.
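For context, a minimal sketch of the kind of change involved, assuming a hypothetical device path and illustrative logic rather than the actual blkdeactivate functions or the attached patch: any filesystem mounted on top of an MD array has to be unmounted before the array is stopped.

  # Illustrative sketch only; /dev/md127 is an example device path.
  dev=/dev/md127

  # Unmount every mount point backed by the device, deepest paths first.
  lsblk -nro MOUNTPOINT "$dev" | sort -r | while read -r mnt; do
      [ -n "$mnt" ] && umount "$mnt"
  done

  # Only once the filesystems are gone is it safe to stop the array.
  mdadm --stop "$dev"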

Version-Release number of selected component (if applicable):
1.02.135-1.el7_3.1.x86_64

How reproducible:
The issue occurs on every shutdown/reboot when using a ZFS root.

I suspect this issue could also lead to corruption of filesystems on software RAID devices, since the underlying MD device is deactivated without being unmounted first.

Steps to Reproduce:
1. Install a system with a ZFS root and software RAID 1 for /boot
2. Reboot or shutdown system
3.

Actual results:
The system loops during shutdown with kernel errors about ZFS being out of memory.

Expected results:
clean shutdown/reboot

Additional info:
I've diagnosed and solved the problem; the patch is attached.

Comment 1 Peter Rajnoha 2017-01-06 10:25:09 UTC
Thanks for the diagnosis and the patch! However, I've changed the patch a bit to check the device's kernel name for "md" instead of checking for the "raidN" type, because that type can also be reported for devices other than MD itself.

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=d90320f4f161658c6a004631c5685b40202af2cc

https://www.redhat.com/archives/lvm-devel/2017-January/msg00015.html
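As a rough illustration of the distinction described above (this is not the actual blkdeactivate code, and the helper name is made up): lsblk reports a TYPE of "raid1", "raid5", etc. for MD arrays as well as for other RAID-capable devices, whereas the kernel name (KNAME) of an MD array starts with "md", so matching on the name is the more precise test.

  # Sketch only: decide whether a block device is an MD array by kernel name.
  is_md_device() {
      kname=$(lsblk -ndro KNAME "$1")
      case "$kname" in
          md*) return 0 ;;
          *)   return 1 ;;
      esac
  }

  # Example use with an illustrative device path:
  if is_md_device /dev/md127; then
      echo "/dev/md127 is an MD array; unmount it before deactivating"
  fi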

