Bug 1208269 - inconsistent attributes of devices in read only raid volumes
Summary: inconsistent attributes of devices in read only raid volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: pre-dev-freeze
Target Release: 7.6
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-01 19:31 UTC by Corey Marthaler
Modified: 2021-09-03 12:39 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.178-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 11:02:16 UTC
Target Upstream Version:
Embargoed:


Attachments:


Links
Red Hat Product Errata RHBA-2018:3193 (last updated 2018-10-30 11:03:23 UTC)

Description Corey Marthaler 2015-04-01 19:31:36 UTC
Description of problem:
Here are three different permission states observed for read-only (RO) RAID volumes, at both the top level and the sub-LV level.
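
For reference, the permission state is the second character of the lvs Attr column: "w" = writeable, "r" = read-only, "R" = read-only activation of a volume that is not itself read-only. The same information can also be reported by name, assuming an lvs build that provides the lv_permissions field:

  lvs -a -o lv_name,lv_attr,lv_permissions test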


# 1. Simple RO creation (top vol=r, sub vols=w)
 
[root@host-111 ~]# lvcreate -pr -m 1 -n raid1 -L 100M test 
  WARNING: Logical volume test/raid1 not zeroed.
  Logical volume "raid1" created.
[root@host-111 ~]# lvs -a -o +devices
  LV               Attr       LSize   Cpy%Sync Devices
  raid1            rri-a-r--- 100.00m 100.00   raid1_rimage_0(0),raid1_rimage_1(0)
  [raid1_rimage_0] iwi-aor--- 100.00m          /dev/sda1(1)
  [raid1_rimage_1] iwi-aor--- 100.00m          /dev/sdb1(1)
  [raid1_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(0)
  [raid1_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(0)
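
For comparison (illustration only, reusing the same test VG), the permission of an existing top-level RaidLV can also be flipped after creation with lvchange:

  lvchange -p r test/raid1     # set the top-level LV read-only
  lvchange -p rw test/raid1    # set it back to read-write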




# 2. RW creation but with tags matching existing read_only_volume_list (top vol=R, sub vols=R) 

#     read_only_volume_list = [ "@RO" ]
[root@host-111 ~]# lvcreate -m 1 -n raid2 -L 100M --addtag RO test
  /dev/test/raid2: write failed after 0 of 4096 at 0: Operation not permitted
  Logical volume "raid2" created.
[root@host-111 ~]# lvs -a -o +devices
  LV               Attr       LSize   Cpy%Sync Devices
  raid2            rRi-a-r--- 100.00m 100.00   raid2_rimage_0(0),raid2_rimage_1(0)
  [raid2_rimage_0] iRi-aor--- 100.00m          /dev/sda1(1)
  [raid2_rimage_1] iRi-aor--- 100.00m          /dev/sdb1(1)
  [raid2_rmeta_0]  eRi-aor---   4.00m          /dev/sda1(0)
  [raid2_rmeta_1]  eRi-aor---   4.00m          /dev/sdb1(0)
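
The read_only_volume_list setting referenced above lives in the activation section of /etc/lvm/lvm.conf; a minimal excerpt of the configuration used for this test would look like:

  # /etc/lvm/lvm.conf
  activation {
      # LVs carrying the "RO" tag are activated read-only
      read_only_volume_list = [ "@RO" ]
  }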




# 3. RO creation with tags matching existing read_only_volume_list (top vol=r, sub vols=R) 

#     read_only_volume_list = [ "@RO" ]
[root@host-111 ~]# lvcreate -pr -m 1 -n raid3 -L 100M --addtag RO test
  WARNING: Logical volume test/raid3 not zeroed.
  Logical volume "raid3" created.
[root@host-111 ~]# lvs -a -o +devices
  LV               Attr       LSize   Cpy%Sync Devices
  raid3            rri-a-r--- 100.00m 100.00   raid3_rimage_0(0),raid3_rimage_1(0)
  [raid3_rimage_0] iRi-aor--- 100.00m          /dev/sda1(1)
  [raid3_rimage_1] iRi-aor--- 100.00m          /dev/sdb1(1)
  [raid3_rmeta_0]  eRi-aor---   4.00m          /dev/sda1(0)
  [raid3_rmeta_1]  eRi-aor---   4.00m          /dev/sdb1(0)
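
To list just the LVs that read_only_volume_list matches on, tag selection can be used with lvs (the @RO argument selects by tag):

  lvs -o lv_name,lv_tags,lv_attr @RO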



Version-Release number of selected component (if applicable):
2.6.32-546.el6.x86_64
lvm2-2.02.118-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015
lvm2-libs-2.02.118-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015
lvm2-cluster-2.02.118-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015
udev-147-2.61.el6    BUILT: Mon Mar  2 05:08:11 CST 2015
device-mapper-1.02.95-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-libs-1.02.95-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-event-1.02.95-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-event-libs-1.02.95-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.118-1.el6    BUILT: Tue Mar 24 08:25:21 CDT 2015

Comment 2 Zdenek Kabelac 2015-06-17 08:35:07 UTC
Not sure what the point is of having a read-only rmeta LV.

Comment 5 Heinz Mauelshagen 2018-03-12 21:32:21 UTC
Sub-LVs of a read-only RaidLV have to be writable to allow for metadata updates on superblocks/bitmaps, as well as for resynchronization/recovery of data images.

Upstream commit 0646fd465eef51f3cfbd2219a0992af27d61ae14
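
One way to see the writable sub-LVs being exercised while the top-level RaidLV stays read-only is to request a scrub and watch the sync state update (illustrative only, reusing the test VG from the report; assumes the RaidLV is active):

  lvchange --syncaction check test/raid1
  lvs -a -o lv_name,lv_attr,sync_percent,raid_sync_action test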

Comment 6 Jonathan Earl Brassow 2018-03-15 20:01:21 UTC
(In reply to Heinz Mauelshagen from comment #5)
> Sub-LVs of a read-only RaidLV have to be writable to allow for metadata
> updates on superblocks/bitmaps, as well as for resynchronization/recovery
> of data images.
> 
> Upstream commit 0646fd465eef51f3cfbd2219a0992af27d61ae14

... so, the upstream patch ensures that you will consistently get 'w' for your sub-LVs.

(right Heinz?)

Comment 8 Heinz Mauelshagen 2018-03-28 23:21:17 UTC
(In reply to Jonathan Earl Brassow from comment #6)
> (In reply to Heinz Mauelshagen from comment #5)
> > Sub-LVs of a read-only RaidLV have to be writable to allow for metadata
> > updates on superblocks/bitmaps, as well as for resynchronization/recovery
> > of data images.
> > 
> > Upstream commit 0646fd465eef51f3cfbd2219a0992af27d61ae14
> 
> ... so, the upstream patch ensures that you will consistently get 'w' for
> your sub-LVs.
> 
> (right Heinz?)

Correct.

Comment 13 Roman Bednář 2018-08-06 09:41:13 UTC
Verified. RAID sub-LV permission attributes are now consistent (all have the 'w' flag) regardless of the top-level volume's permissions.


# 1. Simple RO creation
 
[root@virt-371 ~]# lvcreate -pr -m 1 -n raid1 -L 100M vg
  WARNING: Logical volume vg/raid1 not zeroed.
  Logical volume "raid1" created.

[root@virt-371 ~]# lvs -a -o lv_name,lv_attr
  LV               Attr      
  root             -wi-ao----
  swap             -wi-ao----
  raid1            rri-a-r---
  [raid1_rimage_0] iwi-aor---
  [raid1_rimage_1] iwi-aor---
  [raid1_rmeta_0]  ewi-aor---
  [raid1_rmeta_1]  ewi-aor---




# 2. RW creation but with tags matching existing read_only_volume_list

#     read_only_volume_list = [ "@RO" ]

[root@virt-371 ~]# lvcreate -m 1 -n raid2 -L 100M --addtag RO vg
  Error writing device /dev/vg/raid2 at 0 length 4096.
  Logical volume "raid2" created.

[root@virt-371 ~]# lvs -a -o lv_name,lv_attr
  LV               Attr      
  root             -wi-ao----
  swap             -wi-ao----
  raid2            rRi-a-r---
  [raid2_rimage_0] iwi-aor---
  [raid2_rimage_1] iwi-aor---
  [raid2_rmeta_0]  ewi-aor---
  [raid2_rmeta_1]  ewi-aor---




# 3. RO creation with tags matching existing read_only_volume_list

#     read_only_volume_list = [ "@RO" ]

[root@virt-371 ~]# lvcreate -pr -m 1 -n raid3 -L 100M --addtag RO vg
  WARNING: Logical volume vg/raid3 not zeroed.
  Logical volume "raid3" created.

[root@virt-371 ~]# lvs -a -o lv_name,lv_attr
  LV               Attr      
  root             -wi-ao----
  swap             -wi-ao----
  raid3            rri-a-r---
  [raid3_rimage_0] iwi-aor---
  [raid3_rimage_1] iwi-aor---
  [raid3_rmeta_0]  ewi-aor---
  [raid3_rmeta_1]  ewi-aor---


3.10.0-926.el7.x86_64

lvm2-2.02.180-1.el7    BUILT: Fri Jul 20 19:21:35 CEST 2018
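
To check whether a given system already carries the fix, compare the installed build against the Fixed In Version above (lvm2-2.02.178-1.el7 or later):

  rpm -q lvm2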

Comment 15 errata-xmlrpc 2018-10-30 11:02:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193

