Bug 170705 - LVM needs volume ownership enforcement
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.0
Hardware/OS: All / Linux
Priority: medium   Severity: medium
Target Milestone: alpha
Target Release: ---
Assigned To: LVM and device-mapper development team
QA Contact: Cluster QE
Keywords: FutureFeature, Reopened
Duplicates: 469956
Depends On:
Blocks: 655920 233117 729764 756082 1246231
 
Reported: 2005-10-13 16:58 EDT by Corey Marthaler
Modified: 2015-07-23 14:33 EDT
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Cloned As: 1246231
Environment:
Last Closed: 2012-01-21 12:44:35 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Corey Marthaler 2005-10-13 16:58:19 EDT
Description of problem:
We've discussed this issue before; filing a bz to get it documented.

LVM can't stop "SAN invaders". We all know that one can always dd over in-use
devices, but with a volume manager one may expect more ownership control.

On a two node cluster (link-01 link-02):
[root@link-01 ~]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/gfs0 type gfs (rw,acl)
[root@link-02 ~]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/gfs0 type gfs (rw,acl)

[root@link-02 init.d]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "gfs" using metadata type lvm2


On another machine not in the cluster (link-08):
[root@link-08 lvm]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
  Found volume group "gfs" using metadata type lvm2
[root@link-08 lvm]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               54.81 GB
  PE Size               32.00 MB
  Total PE              1754
  Alloc PE / Size       1752 / 54.75 GB
  Free  PE / Size       2 / 64.00 MB
  VG UUID               Jiq2dT-75ik-N4hq-kfXJ-ZKX3-KUN4-6xUvkx

  --- Volume group ---
  VG Name               gfs
  System ID
  Format                lvm2
  Metadata Areas        6
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                6
  Act PV                6
  VG Size               949.61 GB
  PE Size               4.00 MB
  Total PE              243100
  Alloc PE / Size       243100 / 949.61 GB
  Free  PE / Size       0 / 0
  VG UUID               2u3e5y-PbzE-nGY4-Y4Q5-b34j-tp7z-HdZked

[root@link-08 lvm]# ls /dev/gfs/gfs0
ls: /dev/gfs/gfs0: No such file or directory
[root@link-08 lvm]# vgchange -ay
  2 logical volume(s) in volume group "VolGroup00" now active
  1 logical volume(s) in volume group "gfs" now active
[root@link-08 lvm]# ls /dev/gfs/gfs0
/dev/gfs/gfs0

[root@link-08 lvm]# mkfs /dev/gfs/gfs0
mke2fs 1.35 (28-Feb-2004)
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 964
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
124469248 inodes, 248934400 blocks
12446720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7597 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
inode.i_blocks = 146536, i_size = 4243456
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@link-08 lvm]# mkdir /mnt/ext0
[root@link-08 lvm]# mount /dev/gfs/gfs0 /mnt/ext0
[root@link-08 lvm]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/ext0 type ext2 (rw)
[root@link-08 lvm]# umount /mnt/ext0/
[root@link-08 lvm]# lvremove /dev/gfs/gfs0
Do you really want to remove active logical volume "gfs0"? [y/n]: y
  Logical volume "gfs0" successfully removed
[root@link-08 lvm]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
  Found volume group "gfs" using metadata type lvm2
[root@link-08 lvm]# vgremove gfs
  Volume group "gfs" successfully removed
[root@link-08 lvm]# pvscan
  PV /dev/hda6   VG VolGroup00   lvm2 [54.81 GB / 64.00 MB free]
  PV /dev/sda1                   lvm2 [135.66 GB]
  PV /dev/sdb1                   lvm2 [135.66 GB]
  PV /dev/sdc1                   lvm2 [135.66 GB]
  PV /dev/sdd1                   lvm2 [135.66 GB]
  PV /dev/sde1                   lvm2 [271.33 GB]
  PV /dev/sdf1                   lvm2 [135.66 GB]
  Total: 7 [1004.43 GB] / in use: 1 [54.81 GB] / in no VG: 6 [949.62 GB]
[root@link-08 lvm]# vgcreate myvol /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sdc1 /dev/sdb1 /dev/sda1
  Volume group "myvol" successfully created
[root@link-08 lvm]# lvcreate -l 243100 myvol
  Logical volume "lvol0" created
[root@link-08 lvm]# mkfs /dev/myvol/lvol0
mke2fs 1.35 (28-Feb-2004)
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 964
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
124469248 inodes, 248934400 blocks
12446720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7597 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
inode.i_blocks = 146536, i_size = 4243456
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Comment 1 Dean Jansa 2005-10-13 17:15:41 EDT
Tags provide a means to protect against this, but that puts the burden on each
site/admin.  If all nodes at a site do not use the same set of tools (i.e. someone
uses the base lvm2 tools), you can get into the situation above.

A basic policy enforced as default in the lvm2 tools would minimize the risk.
Comment 2 Alasdair Kergon 2005-11-22 10:54:01 EST
Treating this as a core LVM2 enhancement request.
Comment 3 Alasdair Kergon 2005-11-22 10:58:55 EST
Quotes from several people in email discussion:

> Isn't this assuming that you had a cluster to begin with?  The cluster
> flag gets set on a volume if the cluster infrastructure is being used
> when the create operation takes place (or the user can set it).  If you
> have two nodes that see the same storage and they are not running
> cluster software, the clustered flag will not get set and there will
> never be warnings.


> The situation here is a SAN with many HBAs connected.  Some of the
> connected nodes are part of a cluster, some are not.  In this case a
> warning could be of use, e.g. when the SAN is reconfigured and some nodes
> start seeing VGs that they should not.

A similar problem applies to the case where you have two different
clusters on the same LAN/SAN.  A cluster-VG should be allowed to be used
in one cluster but not the other.


So the name of the host that last used an unclustered VG is stored in
the VG metadata, and it remains completely invisible to other nodes
unless a special reassignment command is run?  Could merge with
vgexport/vgimport.  Needs careful thinking through (machine renames,
swapping the root disk between machines).
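For reference, the host-name-in-metadata idea described above closely matches the system ID mechanism that later lvm2 releases shipped (the lvmsystemid interface, not part of the tooling discussed in this bug); a hedged sketch of how it is used:

```shell
# /etc/lvm/lvm.conf: derive this host's system ID from its hostname
#   global { system_id_source = "uname" }
#
# Stamp the VG with the owning host's system ID:
vgchange --systemid "$(uname -n)" gfs
# Other hosts now treat "gfs" as foreign: it is hidden from normal
# reporting and cannot be activated or removed there until the
# system ID is reassigned by an administrator.
```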


The situation could probably already be handled using tags:
enable host tags and tag each VG with the name of the machine(s) using
it.
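As a sketch of that tag-based workaround (a hypothetical setup using the host tags and volume_list settings in lvm.conf; the host names are the ones from this report and purely illustrative):

```shell
# --- /etc/lvm/lvm.conf fragment on every node sharing the SAN ---
#   tags { hosttags = 1 }                 # defines a tag named after this host
#   activation { volume_list = [ "@*" ] } # activate only LVs in VGs carrying
#                                         # a tag defined on this host
#
# On a cluster node, tag the shared VG with each host allowed to use it:
vgchange --addtag link-01 gfs
vgchange --addtag link-02 gfs
# On link-08, which has no matching tag, activation now skips the VG:
vgchange -ay gfs
```

This only mitigates accidental activation; a node that edits its own lvm.conf (or uses no volume_list at all) can still write to the devices, which is why the comment argues for a default policy in the tools.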


Comment 9 RHEL Product and Program Management 2007-03-09 20:13:24 EST
This bugzilla had previously been approved for engineering
consideration but Red Hat Product Management is currently reevaluating
this issue for inclusion in RHEL4.6.
Comment 17 RHEL Product and Program Management 2008-02-28 18:05:58 EST
Development Management has reviewed and declined this request.  You may appeal
this decision by reopening this request. 
Comment 18 Tom Coughlan 2008-02-29 13:23:43 EST
I'll re-open this as a new feature request, targeted at RHEL 6  
Comment 19 Jonathan Earl Brassow 2009-05-21 10:22:31 EDT
*** Bug 469956 has been marked as a duplicate of this bug. ***
Comment 26 Alasdair Kergon 2011-02-11 15:38:45 EST
Needs a team discussion to agree on the way forward.
Comment 30 Sayan Saha 2012-01-21 12:44:35 EST
Currently there are no plans to address this in RHEL 6 and beyond. Closing this RFE with a WONTFIX resolution.
