Bug 1325707 - anaconda can't clear partitions if existing mdraid is in process of resync
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: anaconda
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Brian Lane
QA Contact: Release Test Team
URL:
Whiteboard:
Duplicates: 1316208 1368110
Depends On:
Blocks: 1187263
 
Reported: 2016-04-10 20:42 UTC by Eugene Kanter
Modified: 2019-12-16 05:37 UTC (History)
CC List: 4 users

Fixed In Version: python-blivet-0.61.15.48-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 23:24:13 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:2158 0 normal SHIPPED_LIVE anaconda bug fix and enhancement update 2016-11-03 13:13:55 UTC

Description Eugene Kanter 2016-04-10 20:42:02 UTC
How reproducible:
Always

Steps to Reproduce:
1. Define a software RAID array on a pair of drives, for example as in this kickstart fragment:

part raid.12 --size  1000 --grow --asprimary --ondrive=vda
part raid.22 --size  1000 --grow --asprimary --ondrive=vdb
raid pv.1    --fstype xfs --device pv.1 --level=RAID1 raid.12 raid.22
volgroup vg_system --pesize=131072 pv.1
logvol / --fstype=xfs --name=lv_root --vgname=vg_system --grow


2. Start the installation process, then interrupt it with a reboot before the resync completes.

3. Restart the installation before the RAID resync has completed. This is easy to do with large drives, which take many minutes to resync.

Actual results:
Anaconda fails with a logical volume creation error.

Expected results:
The RAID array is stopped and re-created.

Additional info:
This may be an mdadm limitation, but Anaconda should at least detect the error and suggest a solution, for example by providing instructions for manually clearing the mdraid metadata from the partitions.
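As a rough illustration of the manual workaround suggested above (this is not part of any fix), the sketch below stops a half-synced array and wipes the mdraid metadata from its member partitions. The device names /dev/md127, /dev/vda1 and /dev/vdb1 are placeholders for whatever your system actually shows; the commands are destructive when run for real, so the example defaults to a dry-run preview.

```shell
# Sketch of manually clearing mdraid metadata before retrying an install.
# Device names below are examples only; substitute your own. DESTRUCTIVE
# when run without DRY_RUN=1.
clear_mdraid() {
    array="$1"; shift
    # In dry-run mode, print each command instead of executing it.
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }
    run mdadm --stop "$array"                  # deactivate the resyncing array
    for member in "$@"; do
        run mdadm --zero-superblock "$member"  # erase the mdraid superblock
        run wipefs -a "$member"                # clear any remaining signatures
    done
}

# Preview the commands without touching any devices:
DRY_RUN=1 clear_mdraid /dev/md127 /dev/vda1 /dev/vdb1
```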

Comment 2 Jan Stodola 2016-04-12 12:25:51 UTC
This is what I got when reproducing the issue:

anaconda 21.48.22.56-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/formats/__init__.py", line 405, in destroy
    raise FormatDestroyError(msg)
  File "/usr/lib/python2.7/site-packages/blivet/deviceaction.py", line 651, in execute
    self.format.destroy()
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 377, in processActions
    action.execute(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 374, in doIt
    self.devicetree.processActions(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 224, in turnOnFilesystems
    storage.doIt(callbacks)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/install.py", line 186, in doInstall
    turnOnFilesystems(storage, mountOnly=flags.flags.dirInstall, callbacks=callbacks_reg)
  File "/usr/lib64/python2.7/threading.py", line 764, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
FormatDestroyError: error wiping old signatures from /dev/mapper/vg_system-lv_root: 1

Partitioning layout in ks:

bootloader --location=mbr
zerombr
clearpart --all --initlabel
part /boot --size 500 --fstype=xfs
part raid.12 --size 4000 --grow --asprimary --ondrive=vda
part raid.22 --size 4000 --grow --asprimary --ondrive=vdb
raid pv.1    --fstype xfs --device pv.1 --level=RAID1 raid.12 raid.22
volgroup vg_system --pesize=131072 pv.1
logvol / --fstype=xfs --name=lv_root --vgname=vg_system --grow --size=2000

Reproduced in a virtual machine (with the disk write speed limited to 6144 kB/s)
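The reproduction above relies on throttling the virtual disks so the resync takes long enough to interrupt. One way to do this with libvirt is `virsh blkdeviotune`, which takes the limit in bytes per second; a minimal sketch, assuming a guest named rhel7-test with disks vda and vdb (both names are hypothetical):

```shell
# Sketch: throttle VM disk writes so the mdraid resync stays in progress
# long enough to interrupt. Guest name "rhel7-test" is a placeholder.
limit_kb=6144                        # write limit used in this report
limit_bytes=$((limit_kb * 1024))     # blkdeviotune expects bytes/sec

for disk in vda vdb; do
    # Uncomment to apply against a running guest:
    # virsh blkdeviotune rhel7-test "$disk" --write-bytes-sec "$limit_bytes"
    echo "virsh blkdeviotune rhel7-test $disk --write-bytes-sec $limit_bytes"
done
```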

Comment 4 Brian Lane 2016-05-27 22:03:46 UTC
It appears that a resyncing RAID introduces enough delay in the appearance of the LV on top of it that we need to wait for the LV's device node to appear:

https://github.com/rhinstaller/blivet/pull/431
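The actual fix lives in python-blivet (linked PR above); as a hedged illustration only, the "wait for the device node" approach can be sketched in shell as a bounded poll with a udev settle between attempts:

```shell
# Illustration of polling for a device node instead of assuming it exists
# immediately after creation. Not the blivet code, just the same idea.
wait_for_node() {
    node="$1"; timeout="${2:-10}"        # path to wait for, timeout in seconds
    elapsed=0
    while [ ! -e "$node" ]; do           # real code would use -b for a block device
        [ "$elapsed" -ge "$timeout" ] && return 1
        udevadm settle --timeout=1 2>/dev/null || true  # let udev catch up
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 0
}
```

Returns 0 as soon as the node appears, 1 if the timeout expires first.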

Comment 6 David Shea 2016-07-06 17:21:29 UTC
*** Bug 1316208 has been marked as a duplicate of this bug. ***

Comment 8 Shijoe Panjikkaran 2016-08-24 11:43:26 UTC
*** Bug 1368110 has been marked as a duplicate of this bug. ***

Comment 10 errata-xmlrpc 2016-11-03 23:24:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2158.html

