Bug 1914389

Summary:          creation of mirror|raid when read_only_volume_list is present no longer works
Product:          Red Hat Enterprise Linux 8
Component:        lvm2
Sub component:    Mirroring and RAID
Version:          8.4
Hardware:         x86_64
OS:               Linux
Keywords:         Regression
Severity:         medium
Priority:         high
Status:           CLOSED ERRATA
Target Milestone: rc
Target Release:   8.0
Fixed In Version: lvm2-2.03.11-3.el8
Reporter:         Corey Marthaler <cmarthal>
Assignee:         Zdenek Kabelac <zkabelac>
QA Contact:       cluster-qe <cluster-qe>
CC:               agk, heinzm, jbrassow, lvm-team, mcsontos, msnitzer, prajnoha, zkabelac
Type:             Bug
Last Closed:      2021-05-18 15:02:04 UTC

Attachments:      verbose lvcreate attempt

Description Corey Marthaler 2021-01-08 17:36:24 UTC
Description of problem:
This regression blocks quite a few regression test cases for the read_only_volume_list configuration option in the lvm.conf file.

I wonder if this has something to do with the wipe-signature behavior changes that went into the rhel8.4 build?
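
For context, the QA configuration relies on LVM tag matching: any LV carrying
the "RO" tag is activated read-only. A minimal sketch of the relevant lvm.conf
stanza (the "RO" tag name is simply what the QA script uses):

  # /etc/lvm/lvm.conf (excerpt)
  activation {
      # LVs matching an entry here are always activated read-only;
      # "@RO" matches any LV tagged "RO".
      read_only_volume_list = [ "@RO" ]
  }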



# RHEL8.3

kernel-4.18.0-240.el8    BUILT: Wed Sep 23 04:46:11 CDT 2020
lvm2-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020
lvm2-libs-2.03.09-5.el8    BUILT: Wed Aug 12 15:51:50 CDT 2020

[root@host-073 ~]# grep read_only_volume_list /etc/lvm/lvm.conf 
        # Configuration option activation/read_only_volume_list.
        read_only_volume_list = [ "@RO" ]  # edited by QA test script qe_lvmconf (on Fri Jan  8 11:24:17 CST 2021)!

[root@host-073 ~]# vgs
  VG            #PV #LV #SN Attr   VSize    VFree   
  raid_sanity     8   0   0 wz--n- <239.94g <239.94g

[root@host-073 ~]# lvcreate --yes  --type raid1 -n kern_perm -L 300M --addtag RO raid_sanity
  Error writing device /dev/raid_sanity/kern_perm at 0 length 4096.
  bcache_invalidate: block (3, 0) still dirty
  Logical volume "kern_perm" created.

[root@host-073 ~]# lvs -a -o +devices
  LV                   VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                    
  kern_perm            raid_sanity   rRi-a-r--- 300.00m                                      100.00           kern_perm_rimage_0(0),kern_perm_rimage_1(0)
  [kern_perm_rimage_0] raid_sanity   iwi-aor--- 300.00m                                                       /dev/sda(1)                                
  [kern_perm_rimage_1] raid_sanity   iwi-aor--- 300.00m                                                       /dev/sdb(1)                                
  [kern_perm_rmeta_0]  raid_sanity   ewi-aor---   4.00m                                                       /dev/sda(0)                                
  [kern_perm_rmeta_1]  raid_sanity   ewi-aor---   4.00m                                                       /dev/sdb(0)                                




# RHEL8.4

kernel-4.18.0-268.el8    BUILT: Mon Dec 28 03:48:25 CST 2020
lvm2-2.03.11-0.4.20201222gitb84a992.el8    BUILT: Tue Dec 22 06:33:49 CST 2020
lvm2-libs-2.03.11-0.4.20201222gitb84a992.el8    BUILT: Tue Dec 22 06:33:49 CST 2020

[root@host-093 ~]# grep read_only_volume_list /etc/lvm/lvm.conf
        # Configuration option activation/read_only_volume_list.
        read_only_volume_list = [ "@RO" ]  # edited by QA test script qe_lvmconf (on Thu Jan  7 16:13:14 CST 2021)!

[root@host-093 ~]# vgs
  VG            #PV #LV #SN Attr   VSize    VFree   
  raid_sanity     8   0   0 wz--n- <239.94g <239.94g

[root@host-093 ~]# lvcreate --yes  --type raid1 -n kern_perm -L 300M --addtag RO raid_sanity
  Failed to initialize logical volume raid_sanity/kern_perm at position 0 and size 4096.
  Aborting. Failed to wipe start of new LV.
[root@host-093 ~]# echo $?
5


How reproducible:
Every time

Comment 3 Corey Marthaler 2021-01-08 17:42:28 UTC
Created attachment 1745662: verbose lvcreate attempt

Comment 4 Corey Marthaler 2021-01-08 20:25:28 UTC
This exists as far back as the original 8.4 lvm build.

lvm2-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020
lvm2-libs-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020

[root@host-093 ~]# lvcreate --yes  --type raid1 -n kern_perm -L 300M --addtag RO raid_sanity
  Failed to initialize logical volume raid_sanity/kern_perm at position 0 and size 4096.
  Aborting. Failed to wipe start of new LV.

Comment 5 Corey Marthaler 2021-01-19 21:54:58 UTC
FWIW, this affects the exact same mirror test scenario as well.

[root@hayes-01 ~]# grep QA /etc/lvm/lvm.conf 
    read_only_volume_list = [ "@RO" ]  # edited by QA test script qe_lvmconf (on Tue Jan 19 14:26:02 CST 2021)!

[root@hayes-01 ~]# lvcreate --yes  --type mirror -n kern_perm -L 300M --addtag RO  --corelog mirror_sanity
  Failed to initialize logical volume mirror_sanity/kern_perm at position 0 and size 4096.
  Aborting. Failed to wipe start of new LV.

Comment 6 Zdenek Kabelac 2021-01-27 15:34:30 UTC
Does this case work when you add the '-Zn -Wn' options?
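
For reference, that would combine both options with the reporter's original
command ('-Zn' disables zeroing of the new LV, '-Wn' disables signature wiping):

  lvcreate --yes --type raid1 -n kern_perm -L 300M --addtag RO -Zn -Wn raid_sanity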

Comment 7 Zdenek Kabelac 2021-01-27 15:48:59 UTC
As for development, we have a couple of options; at the moment it's not clear which one would be preferred:


1.) Basically disable 'creation' of read-only volumes without -Zn.
    However, we already support 'lvcreate -pr -L10 vg', which prints:
    WARNING: Logical volume vg/lvol0 not zeroed

2.) Try to detect ahead of time that 'activation' will result in a read-only
    activation (the LV carries tags that match 'read_only_volume_list'), and
    then handle it like the already-working case mentioned in 1.)

3.) Activate the LV locally without tags, wipe it, deactivate, then activate
    with tags (a manual shell approximation of this flow is sketched below).
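
For illustration, a rough manual equivalent of option 3 (a sketch only, not the
proposed in-tool implementation; it assumes the tag can simply be added after
creation):

  # Create and activate without the RO tag so zeroing/wiping succeeds:
  lvcreate --yes --type raid1 -n kern_perm -L 300M raid_sanity
  # Deactivate, add the tag, and reactivate; the LV now matches
  # read_only_volume_list and comes up read-only:
  lvchange -an raid_sanity/kern_perm
  lvchange --addtag RO raid_sanity/kern_perm
  lvchange -ay raid_sanity/kern_perm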

My vote would probably go to solution 2, although it's not clear how simple it would be for some of the stranger test scenarios.

Comment 8 Corey Marthaler 2021-01-27 16:19:58 UTC
Adding '-Zn' allows the creation; '-Wn' doesn't help. I'll update the test scenarios with -Zn and remove the TESTBLOCKER flag.

[root@host-084 ~]# lvcreate --yes  --type raid1 -n kern_perm -L 300M --addtag RO -Zn raid_sanity
  WARNING: Logical volume raid_sanity/kern_perm not zeroed.
  Logical volume "kern_perm" created.

kernel-4.18.0-277.el8    BUILT: Wed Jan 20 09:06:28 CST 2021
lvm2-2.03.11-1.el8    BUILT: Fri Jan  8 05:21:07 CST 2021
lvm2-libs-2.03.11-1.el8    BUILT: Fri Jan  8 05:21:07 CST 2021

Comment 9 Zdenek Kabelac 2021-02-02 20:27:53 UTC
Pushed a fix that checks whether the LV being activated matches the
read_only_volume_list filter and, if so, skips zeroing (with a warning about
the LV not being zeroed).

Basically matching the '-pr' creation case:

https://www.redhat.com/archives/lvm-devel/2021-February/msg00032.html

Tested by:

https://www.redhat.com/archives/lvm-devel/2021-February/msg00033.html
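
One quick way to confirm the resulting in-kernel permission (a sketch, using
the LV name from the examples above):

  # The second lv_attr character shows the permission;
  # 'R' means read-only activation of an otherwise writeable LV:
  lvs -o lv_name,lv_attr raid_sanity/kern_perm
  # dmsetup should report the mapping state as ACTIVE (READ-ONLY):
  dmsetup info raid_sanity-kern_perm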

Comment 10 Corey Marthaler 2021-02-09 17:46:46 UTC
The create behavior without '-Zn' has been restored. Marking Verified:Tested.

kernel-4.18.0-283.el8    BUILT: Thu Feb  4 05:30:59 CST 2021
lvm2-2.03.11-3.el8    BUILT: Wed Feb  3 10:03:22 CST 2021
lvm2-libs-2.03.11-3.el8    BUILT: Wed Feb  3 10:03:22 CST 2021


[root@host-084 ~]# grep read_only_volume_list /etc/lvm/lvm.conf 
        # Configuration option activation/read_only_volume_list.
        read_only_volume_list = [ "@RO" ]  # edited by QA test script qe_lvmconf (on Tue Feb  9 11:43:25 CST 2021)!

[root@host-084 ~]# vgs
  VG            #PV #LV #SN Attr   VSize    VFree   
  raid_sanity     7   0   0 wz--n- <139.95g <139.95g

[root@host-084 ~]# lvcreate --yes  --type raid1 -n kern_perm -L 300M --addtag RO raid_sanity
  WARNING: Read-only activated logical volume raid_sanity/kern_perm not zeroed.
  Logical volume "kern_perm" created.

[root@host-084 ~]# lvs -a -o +devices
  LV                   VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                    
  kern_perm            raid_sanity   rRi-a-r--- 300.00m                                      100.00           kern_perm_rimage_0(0),kern_perm_rimage_1(0)
  [kern_perm_rimage_0] raid_sanity   iwi-aor--- 300.00m                                                       /dev/sdb(1)                                
  [kern_perm_rimage_1] raid_sanity   iwi-aor--- 300.00m                                                       /dev/sdc(1)                                
  [kern_perm_rmeta_0]  raid_sanity   ewi-aor---   4.00m                                                       /dev/sdb(0)                                
  [kern_perm_rmeta_1]  raid_sanity   ewi-aor---   4.00m                                                       /dev/sdc(0)

Comment 13 Corey Marthaler 2021-02-11 21:19:59 UTC
Verified in the latest build.

kernel-4.18.0-284.el8    BUILT: Mon Feb  8 04:33:33 CST 2021
lvm2-2.03.11-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021
lvm2-libs-2.03.11-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021
lvm2-dbusd-2.03.11-3.el8    BUILT: Wed Feb  3 10:03:11 CST 2021
lvm2-lockd-2.03.11-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021
boom-boot-1.3-1.el8    BUILT: Sat Jan 30 02:31:18 CST 2021
device-mapper-1.02.175-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021
device-mapper-libs-1.02.175-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021
device-mapper-event-1.02.175-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021
device-mapper-event-libs-1.02.175-4.el8    BUILT: Thu Feb 11 04:35:23 CST 2021


SCENARIO - [kern_perm_change_of_mirror_to_rw_after_tag_removal]
Verify in-kernel permission changes are possible when the metadata setting is different
Create a mirror with tags that match what is present in the read_only_volume_list and then change the in-kernel permissions
lvcreate --yes  --type mirror -n kern_perm -L 300M --addtag RO  --corelog mirror_sanity
        kern_perm:mRi-a-m---
        kern_perm_mimage_0:IRi-aom---
        kern_perm_mimage_1:iRi-aom---


SCENARIO (raid1) - [kern_perm_change_of_raid_to_rw_after_tag_removal]
Verify in-kernel permission changes are possible when the metadata setting is different
Create a raid with tags that match what is present in the read_only_volume_list (WITH -Zn) and then change the in-kernel permissions
lvcreate --yes  --type raid1 -n kern_perm1 -L 300M --addtag RO -Zn raid_sanity
  WARNING: Read-only activated logical volume raid_sanity/kern_perm1 not zeroed.
        kern_perm1:rRi-a-r---

Create a raid with tags that match what is present in the read_only_volume_list (withOUT -Zn) and then change the in-kernel permissions
lvcreate --yes  --type raid1 -n kern_perm2 -L 300M --addtag RO raid_sanity
  WARNING: Read-only activated logical volume raid_sanity/kern_perm2 not zeroed.
        kern_perm2:rRi-a-r---
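
The tag-removal step these scenarios exercise amounts to roughly the following
sketch (assuming 'lvchange --refresh' is what reloads the kernel mapping so the
permission change takes effect):

  # Remove the tag so the LV no longer matches read_only_volume_list,
  # then refresh to reload the device-mapper table read-write:
  lvchange --deltag RO raid_sanity/kern_perm2
  lvchange --refresh raid_sanity/kern_perm2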

Comment 15 errata-xmlrpc 2021-05-18 15:02:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659