Bug 1410585
Summary: blkdeactivate does not umount software raid prior to deactivating it
Product: [Community] LVM and device-mapper
Component: device-mapper
Version: 2.02.166
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Reporter: Rick Warner <rick>
Assignee: Peter Rajnoha <prajnoha>
QA Contact: cluster-qe <cluster-qe>
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac
Flags: rule-engine: lvm-technical-solution?
Target Milestone: ---
Target Release: ---
Doc Type: If docs needed, set a value
Story Points: ---
Clones: 1410743 (view as bug list)
Bug Blocks: 1410743
Last Closed: 2019-08-06 00:53:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Thanks for the diagnosis and a patch! However, I've changed the patch a bit to check the device's kernel name for "md" instead of checking the "raidN" type, because that type can also be used for devices other than MD itself.
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=d90320f4f161658c6a004631c5685b40202af2cc
https://www.redhat.com/archives/lvm-devel/2017-January/msg00015.html
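The distinction the maintainer describes can be sketched as a small shell test. This is a minimal illustration of the idea, not the literal commit: MD arrays are identified by their kernel name (an "md" prefix, as reported by e.g. `lsblk -ndo KNAME`), rather than by the lsblk TYPE column ("raidN"), since TYPE=raidN also appears for non-MD devices such as dm-raid logical volumes.

```shell
#!/bin/sh
# Sketch (assumption: names are illustrative): classify a device as an MD
# array by its kernel name, not by its "raidN" type string.
is_md_kname() {
    case "$1" in
        md[0-9]*) return 0 ;;   # md0, md127, ... are MD kernel names
        *)        return 1 ;;   # sda1, dm-3, ... are not
    esac
}
```

A caller would typically obtain the kernel name with something like `kname=$(lsblk -ndo KNAME "$dev")` before testing it.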
Created attachment 1237793 [details]
patches blkdeactivate to try umounting raid devices too

Description of problem:
/usr/sbin/blkdeactivate is called during shutdown/reboot to unmount and deactivate any LVM or dmraid block devices. With the update in RHEL/CentOS 7.3, it now also deactivates software RAID (MD) devices. However, the unmount function was not updated to unmount software RAID devices prior to deactivating them. This prevents the system from shutting down or rebooting when it is set up with software RAID1 for /boot and ZFS for the rest of the system (using the ZFS on Linux repo). I've included a patch to fix it.

Version-Release number of selected component (if applicable):
1.02.135-1.el7_3.1.x86_64

How reproducible:
The issue occurs on every shutdown/reboot when using a ZFS root. I suspect it could also corrupt filesystems on software RAID devices, since the underlying md device is deactivated without being unmounted first.

Steps to Reproduce:
1. Install a system with a ZFS root and software RAID1 for /boot
2. Reboot or shut down the system

Actual results:
The system loops during shutdown with kernel errors about ZFS being out of memory.

Expected results:
Clean shutdown/reboot.

Additional info:
I've diagnosed and solved the problem; the patch is attached.
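The teardown ordering the report asks for can be sketched as follows. The device and mount point names are illustrative, and `RUN="echo"` makes this a dry run so nothing is actually unmounted or stopped; the point is only the order of operations: unmount any filesystem on the array first, and only then stop it.

```shell
#!/bin/sh
# Sketch of the teardown order blkdeactivate should enforce for a mounted
# MD array. Stopping the array while its filesystem is still mounted is
# the corruption risk described in the report.
RUN="echo"           # dry run; set RUN= to execute for real
MD_DEV=/dev/md0      # hypothetical array holding /boot
MNT=/boot            # hypothetical mount point

deactivate_md() {
    # 1. Unmount first; bail out if that fails so we never stop a
    #    device that still has a mounted filesystem.
    $RUN umount "$MNT" || return 1
    # 2. Only now is it safe to stop the array.
    $RUN mdadm --stop "$MD_DEV"
}

deactivate_md
```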