Bug 170705 - LVM needs volume ownership enforcement
Summary: LVM needs volume ownership enforcement
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: alpha
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: Cluster QE
URL:
Whiteboard:
Duplicates: 469956
Depends On:
Blocks: 233117 655920 729764 756082 1246231
 
Reported: 2005-10-13 20:58 UTC by Corey Marthaler
Modified: 2018-10-20 02:22 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Clones: 1246231
Environment:
Last Closed: 2012-01-21 17:44:35 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2005-10-13 20:58:19 UTC
Description of problem:
We've discussed this issue before; filing a BZ to have it documented.

LVM can't stop SAN invaders. We all know that one can always dd over in-use
devices, but with a volume manager one may expect more ownership control.

On a two-node cluster (link-01, link-02):
[root@link-01 ~]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/gfs0 type gfs (rw,acl)
[root@link-02 ~]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/gfs0 type gfs (rw,acl)

[root@link-02 init.d]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "gfs" using metadata type lvm2


On another machine not in the cluster (link-08):
[root@link-08 lvm]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
  Found volume group "gfs" using metadata type lvm2
[root@link-08 lvm]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               54.81 GB
  PE Size               32.00 MB
  Total PE              1754
  Alloc PE / Size       1752 / 54.75 GB
  Free  PE / Size       2 / 64.00 MB
  VG UUID               Jiq2dT-75ik-N4hq-kfXJ-ZKX3-KUN4-6xUvkx

  --- Volume group ---
  VG Name               gfs
  System ID
  Format                lvm2
  Metadata Areas        6
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                6
  Act PV                6
  VG Size               949.61 GB
  PE Size               4.00 MB
  Total PE              243100
  Alloc PE / Size       243100 / 949.61 GB
  Free  PE / Size       0 / 0
  VG UUID               2u3e5y-PbzE-nGY4-Y4Q5-b34j-tp7z-HdZked

[root@link-08 lvm]# ls /dev/gfs/gfs0
ls: /dev/gfs/gfs0: No such file or directory
[root@link-08 lvm]# vgchange -ay
  2 logical volume(s) in volume group "VolGroup00" now active
  1 logical volume(s) in volume group "gfs" now active
[root@link-08 lvm]# ls /dev/gfs/gfs0
/dev/gfs/gfs0

[root@link-08 lvm]# mkfs /dev/gfs/gfs0
mke2fs 1.35 (28-Feb-2004)
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 964
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
124469248 inodes, 248934400 blocks
12446720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7597 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
inode.i_blocks = 146536, i_size = 4243456
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@link-08 lvm]# mkdir /mnt/ext0
[root@link-08 lvm]# mount /dev/gfs/gfs0 /mnt/ext0
[root@link-08 lvm]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/ext0 type ext2 (rw)
[root@link-08 lvm]# umount /mnt/ext0/
[root@link-08 lvm]# lvremove /dev/gfs/gfs0
Do you really want to remove active logical volume "gfs0"? [y/n]: y
  Logical volume "gfs0" successfully removed
[root@link-08 lvm]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
  Found volume group "gfs" using metadata type lvm2
[root@link-08 lvm]# vgremove gfs
  Volume group "gfs" successfully removed
[root@link-08 lvm]# pvscan
  PV /dev/hda6   VG VolGroup00   lvm2 [54.81 GB / 64.00 MB free]
  PV /dev/sda1                   lvm2 [135.66 GB]
  PV /dev/sdb1                   lvm2 [135.66 GB]
  PV /dev/sdc1                   lvm2 [135.66 GB]
  PV /dev/sdd1                   lvm2 [135.66 GB]
  PV /dev/sde1                   lvm2 [271.33 GB]
  PV /dev/sdf1                   lvm2 [135.66 GB]
  Total: 7 [1004.43 GB] / in use: 1 [54.81 GB] / in no VG: 6 [949.62 GB]
[root@link-08 lvm]# vgcreate myvol /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sdc1 /dev/sdb1 /dev/sda1
  Volume group "myvol" successfully created
[root@link-08 lvm]# lvcreate -l 243100 myvol
  Logical volume "lvol0" created
[root@link-08 lvm]# mkfs /dev/myvol/lvol0
mke2fs 1.35 (28-Feb-2004)
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 964
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
124469248 inodes, 248934400 blocks
12446720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7597 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
inode.i_blocks = 146536, i_size = 4243456
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Comment 1 Dean Jansa 2005-10-13 21:15:41 UTC
Tags provide a means to protect against this, but they put the burden on each
site/admin.  If all nodes on a site do not use the same set of tools (e.g. if
someone uses the base lvm2 tools), you can end up in the situation above.

A basic policy enforced by default in the lvm2 tools would minimize the risk.
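For illustration, a minimal sketch of the tag-based restriction described above, using the node names from this report and an invented tag name "alpha"; this is not default lvm2 behaviour, and it only helps if every node attached to the SAN carries a suitable volume_list.

  # /etc/lvm/lvm.conf on each node that is allowed to use the shared VG
  # (any local VGs the node needs, e.g. its root VG, must be listed too):
  #   activation {
  #       volume_list = [ "@alpha" ]
  #   }

  # Tag the shared VG so that it matches the "@alpha" entry:
  vgchange --addtag alpha gfs

  # A node whose volume_list contains neither "gfs" nor a matching tag will
  # skip the gfs LVs on "vgchange -ay"; a node with no volume_list at all
  # remains unprotected, which is the per-site burden noted above.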

Comment 2 Alasdair Kergon 2005-11-22 15:54:01 UTC
Treating this as a core LVM2 enhancement request.

Comment 3 Alasdair Kergon 2005-11-22 15:58:55 UTC
Quotes from several people in email discussion:

> Isn't this assuming that you had a cluster to begin with?  The cluster
> flag gets set on a volume if the cluster infrastructure is being used
> when the create operation takes place (or the user can set it).  If you
> have two nodes that see the same storage and they are not running
> cluster software, the clustered flag will not get set and there will
> never be warnings.

> The situation here is a SAN with many HBAs connected. Some of the
> connected nodes are part of a cluster, some are not. In this case a
> warning could be of use, e.g. when the SAN is reconfigured and some nodes
> start seeing VGs that they should not.

A similar problem applies to the case where you have two different
clusters on the same LAN/SAN.  A cluster-VG should be allowed to be used
in one cluster but not the other.
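For reference, the clustered flag that the quoted discussion refers to can be inspected and toggled with the stock tools (a sketch; setting it normally assumes the cluster locking infrastructure is configured):

  vgs -o vg_name,vg_attr gfs   # a "c" in the last vg_attr character marks a clustered VG
  vgchange -cy gfs             # set the clustered flag on an existing VG
  vgchange -cn gfs             # clear it again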


So the name of the host that last used an unclustered VG is stored in
the VG metadata, and it remains completely invisible to other nodes
unless a special reassignment command is run?  Could merge with
vgexport/vgimport.  Needs careful thinking through (machine renames,
swapping the root disk between machines).
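A rough sketch of the vgexport/vgimport handoff mentioned above. Note the exported flag is advisory: it stops accidental activation, not a determined node.

  # On the node giving up the VG (its LVs must be deactivated first):
  vgchange -an gfs
  vgexport gfs        # marks the VG exported; it cannot be activated until imported again

  # On the node taking ownership:
  vgimport gfs
  vgchange -ay gfs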


Situation could probably already be handled using tags:
Enable hosttags and tag each VG with the name of the machine(s) using
it.
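A minimal sketch of that hosttags suggestion, again assuming the node names from this report; every node attached to the SAN needs the same lvm.conf settings, and each node's local VGs must also carry its own host tag (or be listed explicitly) or they will stop activating.

  # /etc/lvm/lvm.conf on every node attached to the SAN:
  #   tags { hosttags = 1 }                   # defines a tag named after the machine's hostname
  #   activation { volume_list = [ "@*" ] }   # activate only VGs/LVs carrying one of this host's tags

  # Tag the VG with the hosts that are allowed to activate it:
  vgchange --addtag link-01 gfs
  vgchange --addtag link-02 gfs

  # link-08 carries only the host tag "link-08", which is not set on "gfs",
  # so "vgchange -ay" there skips the gfs volume group.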




Comment 9 RHEL Program Management 2007-03-10 01:13:24 UTC
This bugzilla had previously been approved for engineering
consideration but Red Hat Product Management is currently reevaluating
this issue for inclusion in RHEL4.6.

Comment 17 RHEL Program Management 2008-02-28 23:05:58 UTC
Development Management has reviewed and declined this request.  You may appeal
this decision by reopening this request. 

Comment 18 Tom Coughlan 2008-02-29 18:23:43 UTC
I'll re-open this as a new feature request, targeted at RHEL 6  

Comment 19 Jonathan Earl Brassow 2009-05-21 14:22:31 UTC
*** Bug 469956 has been marked as a duplicate of this bug. ***

Comment 26 Alasdair Kergon 2011-02-11 20:38:45 UTC
Needs a team discussion to agree on the way forward.

Comment 30 Sayan Saha 2012-01-21 17:44:35 UTC
Currently there are no plans to address this in RHEL 6 and beyond. Closing this RFE with a WONTFIX resolution.

