Bug 2149307 - mdadm: Couldn't open /dev/vda3 for write - not zeroing
Summary: mdadm: Couldn't open /dev/vda3 for write - not zeroing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: mdadm
Version: 8.8
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: XiaoNi
QA Contact: Fine Fan
URL:
Whiteboard:
Duplicates: 2155971 (view as bug list)
Depends On:
Blocks: 2129764 2144443
 
Reported: 2022-11-29 14:33 UTC by Jan Stodola
Modified: 2023-05-16 11:15 UTC
CC List: 6 users

Fixed In Version: mdadm-4.2-7.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-16 09:09:23 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
Installer logs (103.94 KB, application/octet-stream) - 2023-01-04 10:00 UTC, IBM Bug Proxy
Screenshot of 'Report Bug' (94.61 KB, application/octet-stream) - 2023-01-04 10:00 UTC, IBM Bug Proxy
Updated installer logs (206.62 KB, application/octet-stream) - 2023-01-04 10:00 UTC, IBM Bug Proxy


Links
Red Hat Issue Tracker RHELPLAN-140794 (last updated 2022-11-29 14:41:29 UTC)
Red Hat Product Errata RHBA-2023:2998 (last updated 2023-05-16 09:09:34 UTC)

Description Jan Stodola 2022-11-29 14:33:39 UTC
This bug was initially created as a copy of Bug #2149292

I am copying this bug because: 
The problem can also be reproduced on RHEL-8.8.0-20221128.0 with mdadm-4.2-6.el8.



Description of problem:
Installation fails with a traceback when reinstalling a system with a RAID device from a previous installation:

...
INFO:program:Running [53] mdadm --zero-superblock /dev/vda3 ...
INFO:program:stdout[53]: 
INFO:program:stderr[53]: mdadm: Couldn't open /dev/vda3 for write - not zeroing

INFO:program:...done [53] (exit code: 2)
INFO:anaconda.threading:Thread Failed: AnaTaskThread-CreateStorageLayoutTask-1 (140220576028224)
ERROR:anaconda.modules.common.task.task:Thread AnaTaskThread-CreateStorageLayoutTask-1 has failed: Traceback (most recent call last):
  File "/usr/lib64/python3.9/site-packages/gi/overrides/BlockDev.py", line 1093, in wrapped
    ret = orig_obj(*args, **kwargs)
gi.repository.GLib.GError: g-bd-utils-exec-error-quark: Process reported exit code 2: mdadm: Couldn't open /dev/vda3 for write - not zeroing
 (0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.9/site-packages/pyanaconda/threading.py", line 275, in run
    threading.Thread.run(self)
  File "/usr/lib64/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/common/task/task.py", line 96, in _thread_run_callback
    self._task_run_callback()
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/common/task/task.py", line 109, in _task_run_callback
    self._set_result(self.run())
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/storage/installation.py", line 86, in run
    self._turn_on_filesystems(
  File "/usr/lib64/python3.9/site-packages/pyanaconda/modules/storage/installation.py", line 166, in _turn_on_filesystems
    storage.do_it(callbacks)
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/blivet.py", line 115, in do_it
    self.devicetree.actions.process(callbacks=callbacks, devices=self.devices)
  File "/usr/lib/python3.9/site-packages/blivet/actionlist.py", line 47, in wrapped_func
    return func(obj, *args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/actionlist.py", line 284, in process
    action.execute(callbacks)
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/deviceaction.py", line 760, in execute
    self.format.destroy()
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/formats/__init__.py", line 553, in destroy
    self._destroy(**kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/blivet/formats/mdraid.py", line 92, in _destroy
    blockdev.md.destroy(self.device)
  File "/usr/lib64/python3.9/site-packages/gi/overrides/BlockDev.py", line 1115, in wrapped
    raise transform[1](msg)
gi.overrides.BlockDev.MDRaidError: Process reported exit code 2: mdadm: Couldn't open /dev/vda3 for write - not zeroing
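
The traceback shows blivet calling libblockdev's md.destroy(), which runs the same "mdadm --zero-superblock /dev/vda3" command recorded in the program log above. As a rough sketch (assuming shell access to the installer environment, e.g. on a second virtual console; not part of this report), the failing step can be repeated by hand to confirm the exit code:

# Re-run the command that libblockdev executes on behalf of blivet (device name taken from the log above)
mdadm --zero-superblock /dev/vda3
echo $?   # exit code 2 matches the "Couldn't open ... for write - not zeroing" error path in mdadm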


Version-Release number of selected component (if applicable):
RHEL-9.2.0-20221128.3
python-blivet-3.6.0-3.el9

How reproducible:
Always

Steps to Reproduce:
1. Have a system with rootfs on RAID1. Example kickstart partitioning:

clearpart --all --initlabel
part /boot --asprimary --size=500 --label=boot
part swap --fstype=swap --recommended --label=swap
part raid.01 --size=8000
part raid.02 --size=8000
raid / --device=0 --level=RAID1 raid.01 raid.02

2. Try to re-install the system using autopart:

clearpart --all --initlabel
autopart


Actual results:
Anaconda traceback

Expected results:
RAID removed and new partitioning successfully created.

Comment 2 XiaoNi 2022-12-05 13:51:34 UTC
Hi Jan

Could you try https://people.redhat.com/xni/mdadm-4.2-7.el8.x86_64.rpm?

Thanks
Xiao

Comment 3 Jan Stodola 2022-12-05 17:00:15 UTC
Created new installation images from compose RHEL-8.8.0-20221204.2 + mdadm-4.2-7.el8.x86_64.rpm, but the problem persists.

Comment 4 XiaoNi 2022-12-06 07:45:16 UTC
Hi Jan

Thanks for the check.

Some packages have been updated. Could you try
RHEL-8.8.0-20221204.2 + mdadm-4.2-5.el8.x86_64.rpm (http://download.eng.bos.redhat.com/brewroot/packages/mdadm/4.2/5.el8/x86_64/mdadm-4.2-5.el8.x86_64.rpm)?

If it can't be reproduced, we can confirm this is a regression introduced by mdadm-4.2-6.

Comment 5 XiaoNi 2022-12-06 07:50:39 UTC
        
The message is printed by this code:

        fd = open(dev, O_RDWR|(noexcl ? 0 : O_EXCL));
        if (fd < 0) {
                if (verbose >= 0)
                        pr_err("Couldn't open %s for write - not zeroing\n",
                                dev);
                return 2;
        }

So I'm not sure why it can't open /dev/vda3 but can open /dev/vda2.
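
For a block device, an open() with O_EXCL fails with EBUSY while the kernel still considers the device claimed, for example as a member of a running md array or as a mounted filesystem. A quick way to check for such holders from a shell (a sketch using the /dev/vda3 device name from this bug; these checks are not part of the original report):

cat /proc/mdstat                     # is an old array still assembled from /dev/vda3?
ls /sys/class/block/vda3/holders/    # kernel holders (e.g. an md device) that would block an O_EXCL open
lsblk -o NAME,TYPE,MOUNTPOINT /dev/vda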

Comment 6 XiaoNi 2022-12-06 08:17:07 UTC
Hi Jan

Could you share your ks.cfg so I can try to reproduce this myself?

I did a test like this:

sdb                              8:16   0 279.4G  0 disk
|-sdb1                           8:17   0    10G  0 part
`-sdb2                           8:18   0    10G  0 part
[root@dell-per930-01 ~]# mdadm -CR /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdb2 --assume-clean
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@dell-per930-01 ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@dell-per930-01 ~]# mdadm --zero-superblock /dev/sdb1
[root@dell-per930-01 ~]# mdadm --zero-superblock /dev/sdb2
[root@dell-per930-01 ~]# mdadm -E /dev/sdb1
mdadm: No md superblock detected on /dev/sdb1.
[root@dell-per930-01 ~]# mdadm -E /dev/sdb2
mdadm: No md superblock detected on /dev/sdb2.

The problem can't be reproduced this way.
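
One variant that would be expected to hit the same failure (a sketch based on the O_EXCL open quoted in comment 5; not confirmed to be what actually happens during installation) is zeroing a member superblock while the array is still running:

mdadm -CR /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdb2 --assume-clean
# Skipping "mdadm --stop /dev/md0": the member is still claimed by the md driver,
# so the exclusive open in --zero-superblock should fail:
mdadm --zero-superblock /dev/sdb1
# expected: mdadm: Couldn't open /dev/sdb1 for write - not zeroing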

Comment 9 Jan Stodola 2022-12-07 17:11:14 UTC
Reassigning to blivet, the installer storage library.

Comment 25 Jan Stodola 2023-01-04 09:18:12 UTC
*** Bug 2155971 has been marked as a duplicate of this bug. ***

Comment 26 IBM Bug Proxy 2023-01-04 10:00:51 UTC
Created attachment 1935691 [details]
Installer logs

Comment 27 IBM Bug Proxy 2023-01-04 10:00:52 UTC
Created attachment 1935692 [details]
Screenshot of 'Report Bug'

Comment 28 IBM Bug Proxy 2023-01-04 10:00:54 UTC
Created attachment 1935693 [details]
Updated installer logs

Comment 31 Fine Fan 2023-01-19 09:23:00 UTC
mdadm-4.2-7.el8 has passed the sanity test.

Comment 35 IBM Bug Proxy 2023-02-10 04:50:54 UTC
------- Comment From Akanksha.J.N 2023-02-09 23:49 EDT-------
The issue is not reproducible and I was able to proceed with the installation.

# uname -r
4.18.0-452.el8.ppc64le
# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  105G  0 disk
├─sda1        8:1    0    4M  0 part
├─sda2        8:2    0    1G  0 part  /boot
├─sda3        8:3    0   22G  0 part
│ └─md126     9:126  0   22G  0 raid1 /home
├─sda4        8:4    0    1K  0 part
├─sda5        8:5    0  8.8G  0 part
│ └─md127     9:127  0  8.8G  0 raid1 /
└─sda6        8:6    0   12G  0 part  [SWAP]
sdb           8:16   1 14.5G  0 disk
└─sdb1        8:17   1    2G  0 part
nvme0n1     259:0    0  2.9T  0 disk
├─nvme0n1p1 259:1    0  176G  0 part
│ └─md126     9:126  0   22G  0 raid1 /home
└─nvme0n1p2 259:2    0 70.1G  0 part
  └─md127     9:127  0  8.8G  0 raid1 /

Comment 37 errata-xmlrpc 2023-05-16 09:09:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (mdadm bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2998

