Bug 2149292 - mdadm: Couldn't open /dev/vda3 for write - not zeroing
Summary: mdadm: Couldn't open /dev/vda3 for write - not zeroing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: mdadm
Version: 9.2
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: beta
Target Release: ---
Assignee: XiaoNi
QA Contact: Fine Fan
URL:
Whiteboard:
Depends On:
Blocks: 2129768 2144442
 
Reported: 2022-11-29 13:23 UTC by Jan Stodola
Modified: 2023-05-09 10:28 UTC
CC List: 2 users

Fixed In Version: mdadm-4.2-8.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-09 08:19:45 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
System                  ID               Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker   RHELPLAN-140788  0        None      None    None     2022-11-29 13:39:06 UTC
Red Hat Product Errata  RHBA-2023:2513   0        None      None    None     2023-05-09 08:19:55 UTC

Description Jan Stodola 2022-11-29 13:23:01 UTC
Description of problem:
Installation fails with a traceback when reinstalling a system with a RAID device from a previous installation:

...
INFO:program:Running [53] mdadm --zero-superblock /dev/vda3 ...
INFO:program:stdout[53]: 
INFO:program:stderr[53]: mdadm: Couldn't open /dev/vda3 for write - not zeroing

INFO:program:...done [53] (exit code: 2)
INFO:anaconda.threading:Thread Failed: AnaTaskThread-CreateStorageLayoutTask-1 (140220576028224)
ERROR:anaconda.modules.common.task.task:Thread AnaTaskThread-CreateStorageLayoutTask-1 has failed: Traceback (most recent call last):
  File "/usr/lib64/python3.9/site-packages/gi/overrides/BlockDev.py", line 1093, in wrapped
    ret = orig_obj(*args, **kwargs)
gi.repository.GLib.GError: g-bd-utils-exec-error-quark: Process reported exit code 2: mdadm: Couldn't open /dev/vda3 for write - not zeroing
 (0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.9/site-packages/pyanaconda/threading.py", line 275, in run
    threading.Thread.run(self)
  File "/usr/lib64/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/common/task/task.py", line 96, in _thread_run_callback
    self._task_run_callback()
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/common/task/task.py", line 109, in _task_run_callback
    self._set_result(self.run())
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/storage/installation.py", line 86, in run
    self._turn_on_filesystems(
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/storage/installation.py", line 166, in _turn_on_filesystems
    storage.do_it(callbacks)
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/blivet.py", line 115, in do_it
    self.devicetree.actions.process(callbacks=callbacks, devices=self.devices)
  File "/usr/lib/python3.9/site-packages/blivet/actionlist.py", line 47, in wrapped_func
    return func(obj, *args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/actionlist.py", line 284, in process
    action.execute(callbacks)
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/deviceaction.py", line 760, in execute
    self.format.destroy()
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/formats/__init__.py", line 553, in destroy
    self._destroy(**kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/formats/mdraid.py", line 92, in _destroy
    blockdev.md.destroy(self.device)
  File "/usr/lib64/python3.9/site-packages/gi/overrides/BlockDev.py", line 1115, in wrapped
    raise transform[1](msg)
gi.overrides.BlockDev.MDRaidError: Process reported exit code 2: mdadm: Couldn't open /dev/vda3 for write - not zeroing
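
For context (an inference, not stated in the report itself): mdadm prints "Couldn't open ... for write - not zeroing" and exits with code 2 when it cannot open the member device for exclusive write access, which usually means something else still holds the device open, for example a leftover array that udev auto-assembled during installation. A minimal shell sketch of the failing step and a manual way to release the device first; the array and device names are illustrative:

cat /proc/mdstat                   # look for an active mdXXX array using vda3
mdadm --stop /dev/md127            # stop it, releasing the member devices
mdadm --zero-superblock /dev/vda3  # should now exit 0 instead of 2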


Version-Release number of selected component (if applicable):
RHEL-9.2.0-20221128.3
python-blivet-3.6.0-3.el9

How reproducible:
Always

Steps to Reproduce:
1. Install a system with the root filesystem on RAID1, for example with the following kickstart partitioning (a verification sketch follows the snippet):

clearpart --all --initlabel
part /boot --asprimary --size=500 --label=boot
part swap --fstype=swap --recommended --label=swap
part raid.01 --size=8000
part raid.02 --size=8000
raid / --device=0 --level=RAID1 raid.01 raid.02
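
Before moving on to step 2, the array created by the first installation can be confirmed on the installed system (a sketch; the md device name may differ, e.g. md127 instead of md0):

cat /proc/mdstat         # should list an active raid1 array
mdadm --detail /dev/md0  # should show the two raid.0x partitions as members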

2. Try to re-install the system using autopart:

clearpart --all --initlabel
autopart


Actual results:
Anaconda fails with the traceback shown above.

Expected results:
The old RAID array is removed and the new partitioning is created successfully.
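
A possible interim workaround (an assumption, not something the report confirms): deactivate any leftover arrays and wipe the member superblocks in a kickstart %pre script, before anaconda's partitioning runs. Device names below are illustrative:

%pre
# Hypothetical workaround: stop auto-assembled arrays so the member
# partitions are no longer busy, then wipe their RAID metadata.
mdadm --stop --scan || true
mdadm --zero-superblock /dev/vda3 || true
mdadm --zero-superblock /dev/vda4 || true
%end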

Comment 3 Jan Stodola 2022-11-29 14:35:22 UTC
Also reproduced on RHEL-8.8 with mdadm-4.2-6.el8, reported as bug 2149307.

Comment 4 Fine Fan 2023-01-19 09:26:53 UTC
mdadm-4.2-8.el9 has passed the sanity test.

Comment 10 errata-xmlrpc 2023-05-09 08:19:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (mdadm bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2513

