1) If an LV is activated read-only (via read_only_volume_list) and that setting is later cleared, it would be useful if 'lvchange -prw' made the LV read-write in the kernel, as an alternative to 'lvchange --refresh'.

2) If for any other reason an LV happens to be read-only in the kernel while its metadata says it should be read-write, it would be useful if 'lvchange -prw' made it read-write in the kernel.

Requested for RHEV's use.
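For reference, the metadata-vs-kernel permission state shows up in the second character of the lvs attr field ('w' = writable, 'r' = read-only, 'R' = read-only activation of an otherwise writable volume). A minimal sketch of pulling that bit out of an attr string; the sample value is illustrative, any string from 'lvs -o lv_attr' works the same way:

```shell
# Decode the permission bit (2nd character) of an lvs attr string.
#   w = writable, r = read-only,
#   R = read-only activation of a non-read-only volume
attr="rRi-a-r---"                      # sample attr value for illustration
perm=$(printf '%s' "$attr" | cut -c2)  # extract character 2
echo "$perm"                           # prints: R
```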
Create an LV (writeable as normal), inactive. Set read_only_volume_list in the config to activate it read-only, then activate it. lvs shows 'R':

  lvol0 vg6 -Ri-a----- 12.00m

Remove the read_only_volume_list setting and run lvchange -prw.

With the current code:
  Logical volume "lvol0" is already writable

With the new code:
  Logical volume "lvol0" is already writable. Refreshing kernel state.
  Logical volume "lvol0" changed.

Side-effect: the same happens repeatedly if you still have read_only_volume_list set.

https://lists.fedorahosted.org/pipermail/lvm2-commits/2015-February/003577.html
The same applies to -pr when the kernel state is read-write but the metadata is already read-only.

https://lists.fedorahosted.org/pipermail/lvm2-commits/2015-February/003585.html
SCENARIO (raid1) - [kernel_perm_changes_tag_removal]
Verify in-kernel permission changes are possible when the metadata setting is different.

Create a raid with tags that match what is present in read_only_volume_list, then change the in-kernel permissions.

  # in /etc/lvm/lvm.conf
  read_only_volume_list = [ "@RO" ]

  lvcreate --type raid1 -n kern_perm -L 300M --addtag RO raid_sanity
  /dev/raid_sanity/kern_perm: write failed after 0 of 4096 at 0: Operation not permitted

  kern_perm:          attr=rRi-a-r---
  kern_perm_rimage_0: attr=IRi-aor---
  kern_perm_rimage_1: attr=IRi-aor---

Remove the tag from the raid volume, and attempt to change the in-kernel RW permissions:

  lvchange --deltag RO raid_sanity/kern_perm
  lvchange -prw raid_sanity/kern_perm

  kern_perm:          attr=rwi-a-r---
  kern_perm_rimage_0: attr=iwi-aor---
  kern_perm_rimage_1: attr=iwi-aor---

Add the tag back to the raid volume, and attempt to change the in-kernel RW permissions:

  lvchange --addtag RO raid_sanity/kern_perm
  lvchange -pr raid_sanity/kern_perm

  kern_perm:          attr=rri-a-r---
  kern_perm_rimage_0: attr=iRi-aor---
  kern_perm_rimage_1: attr=iRi-aor---

Shouldn't the top-level raid volume now have the 'R' (read-only activation of non-read-only volume) again, like it did when originally created?

  [root@host-111 ~]# lvs -a -o +devices,lv_tags
    LV                   Attr       LSize   Cpy%Sync Devices                                     LV Tags
    kern_perm            rri-a-r--- 300.00m 100.00   kern_perm_rimage_0(0),kern_perm_rimage_1(0) RO
    [kern_perm_rimage_0] iRi-aor--- 300.00m          /dev/sda2(1)
    [kern_perm_rimage_1] iRi-aor--- 300.00m          /dev/sda1(1)
    [kern_perm_rmeta_0]  eRi-aor--- 4.00m            /dev/sda2(0)
    [kern_perm_rmeta_1]  eRi-aor--- 4.00m            /dev/sda1(0)
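Scanning a pasted lvs listing for LVs whose attr carries the kernel read-only override can make this kind of check scriptable. A minimal sketch, assuming the default whitespace-separated lvs layout with the attr string in the second field; the sample lines are modeled on the output quoted in this scenario:

```shell
# List LVs whose attr permission bit (position 2) is 'R',
# i.e. the kernel holds them read-only independently of metadata.
# Sample lines modeled on the lvs output in this report.
lvs_out='kern_perm rri-a-r---
kern_perm_rimage_0 iRi-aor---
kern_perm_rimage_1 iRi-aor---'
printf '%s\n' "$lvs_out" | awk 'substr($2, 2, 1) == "R" { print $1 }'
# prints: kern_perm_rimage_0 and kern_perm_rimage_1
```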
(In reply to Corey Marthaler from comment #8)
> lvchange -prw raid_sanity/kern_perm
> lvchange --addtag RO raid_sanity/kern_perm
> lvchange -pr raid_sanity/kern_perm

This changes both the on-disk and the in-kernel permissions to read-only. This bugzilla is about making 'lvchange -p' check (and change) the in-kernel permissions even if the on-disk permissions are already in the desired state.
Filed bug 1208269 for the inconsistencies in the attributes mentioned in comment #8.
Feature verified in the latest rpms.

2.6.32-546.el6.x86_64
lvm2-2.02.118-1.el6                        BUILT: Tue Mar 24 08:25:21 CDT 2015
lvm2-libs-2.02.118-1.el6                   BUILT: Tue Mar 24 08:25:21 CDT 2015
lvm2-cluster-2.02.118-1.el6                BUILT: Tue Mar 24 08:25:21 CDT 2015
udev-147-2.61.el6                          BUILT: Mon Mar  2 05:08:11 CST 2015
device-mapper-1.02.95-1.el6                BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-libs-1.02.95-1.el6           BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-event-1.02.95-1.el6          BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-event-libs-1.02.95-1.el6     BUILT: Tue Mar 24 08:25:21 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6  BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.118-1.el6                     BUILT: Tue Mar 24 08:25:21 CDT 2015

# RO -> RW
lvcreate --type raid1 -n kern_perm -L 300M --addtag RO raid_sanity
  /dev/raid_sanity/kern_perm: write failed after 0 of 4096 at 0: Operation not permitted

[root@host-111 ~]# lvs -a -o +devices
  LV                   Attr       LSize   Cpy%Sync Devices
  kern_perm            rRi-a-r--- 300.00m 100.00   kern_perm_rimage_0(0),kern_perm_rimage_1(0)
  [kern_perm_rimage_0] iRi-aor--- 300.00m          /dev/sda2(1)
  [kern_perm_rimage_1] iRi-aor--- 300.00m          /dev/sda1(1)
  [kern_perm_rmeta_0]  eRi-aor--- 4.00m            /dev/sda2(0)
  [kern_perm_rmeta_1]  eRi-aor--- 4.00m            /dev/sda1(0)

lvchange --deltag RO raid_sanity/kern_perm

[root@host-111 ~]# lvchange -prw raid_sanity/kern_perm
  Logical volume "kern_perm" is already writable. Refreshing kernel state.
  Logical volume "kern_perm" changed.
[root@host-111 ~]# lvs -a -o +devices
  LV                   Attr       LSize   Cpy%Sync Devices
  kern_perm            rwi-a-r--- 300.00m 100.00   kern_perm_rimage_0(0),kern_perm_rimage_1(0)
  [kern_perm_rimage_0] iwi-aor--- 300.00m          /dev/sda2(1)
  [kern_perm_rimage_1] iwi-aor--- 300.00m          /dev/sda1(1)
  [kern_perm_rmeta_0]  ewi-aor--- 4.00m            /dev/sda2(0)
  [kern_perm_rmeta_1]  ewi-aor--- 4.00m            /dev/sda1(0)

# RW -> RO
lvcreate --type raid1 -m 1 -n kern_perm -L 300M raid_sanity

[root@host-111 ~]# lvs -a -o +devices
  LV                   Attr       LSize   Cpy%Sync Devices
  kern_perm            rwi-a-r--- 300.00m 100.00   kern_perm_rimage_0(0),kern_perm_rimage_1(0)
  [kern_perm_rimage_0] iwi-aor--- 300.00m          /dev/sda2(1)
  [kern_perm_rimage_1] iwi-aor--- 300.00m          /dev/sda1(1)
  [kern_perm_rmeta_0]  ewi-aor--- 4.00m            /dev/sda2(0)
  [kern_perm_rmeta_1]  ewi-aor--- 4.00m            /dev/sda1(0)

lvchange --addtag RO raid_sanity/kern_perm

[root@host-111 ~]# lvchange -pr raid_sanity/kern_perm
  Logical volume "kern_perm" changed.

[root@host-111 ~]# lvs -a -o +devices
  LV                   Attr       LSize   Cpy%Sync Devices
  kern_perm            rri-a-r--- 300.00m 100.00   kern_perm_rimage_0(0),kern_perm_rimage_1(0)
  [kern_perm_rimage_0] iRi-aor--- 300.00m          /dev/sda2(1)
  [kern_perm_rimage_1] iRi-aor--- 300.00m          /dev/sda1(1)
  [kern_perm_rmeta_0]  eRi-aor--- 4.00m            /dev/sda2(0)
  [kern_perm_rmeta_1]  eRi-aor--- 4.00m            /dev/sda1(0)
Is this supposed to work with clvmd as well? (see bug 1210105)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-1411.html