Bug 170705
| Summary: | LVM needs volume ownership enforcement | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
| Status: | CLOSED WONTFIX | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.0 | CC: | agk, ccaulfie, coughlan, cww, dwysocha, mbroz, ssaha, tao |
| Target Milestone: | alpha | Keywords: | FutureFeature, Reopened |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1246231 (view as bug list) | Environment: | |
| Last Closed: | 2012-01-21 17:44:35 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 233117, 655920, 729764, 756082, 1246231 | | |
Tags provide a means to protect against this, but it puts the burden on each site/admin. If all nodes on a site do not use the same set of tools (e.g. someone uses the base lvm2 tools), you can end up in the situation above. A basic policy enforced by default in the lvm2 tools would minimize the risk. Treating this as a core LVM2 enhancement request.

Quotes from several people in email discussion:

> Isn't this assuming that you had a cluster to begin with? The cluster
> flag gets set on a volume if the cluster infrastructure is being used
> when the create operation takes place (or the user can set it). If you
> have two nodes that see the same storage and they are not running
> cluster software, the clustered flag will not get set and there will
> never be warnings.

> The situation here is a SAN with many HBAs connected. Some of the
> connected nodes are part of a cluster, some are not. In this case a
> warning could be of use, e.g. when the SAN is reconfigured and some nodes
> start seeing VGs that they should not.

A similar problem applies to the case where you have two different clusters on the same LAN/SAN. A cluster VG should be allowed to be used in one cluster but not the other.

So the name of the host that last used an unclustered VG is stored in the VG metadata, and it remains completely invisible to other nodes unless a special reassignment command is run? Could merge with vgexport/vgimport. Needs careful thinking through (machine renames, swapping root disks between machines).

The situation could probably already be handled using tags: enable host tags and tag each VG with the name of the machine(s) using it (a sketch of this workaround follows these comments).

This bugzilla had previously been approved for engineering consideration, but Red Hat Product Management is currently reevaluating this issue for inclusion in RHEL 4.6. Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

I'll re-open this as a new feature request, targeted at RHEL 6.

*** Bug 469956 has been marked as a duplicate of this bug. ***

Needs a team discussion to agree on the way forward.

Currently there are no plans to address this in RHEL 6 and beyond. Closing this RFE with a WONTFIX resolution.
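A minimal sketch of the tag-based workaround mentioned above, assuming the cluster nodes are named link-01 and link-02 and the shared VG is "gfs" as in the description below; the lvm.conf keys (tags/hosttags, activation/volume_list) and vgchange --addtag are standard lvm2 features, but exact behavior varies by lvm2 version, and the policy only works if it is configured consistently on every host attached to the SAN:

```
# /etc/lvm/lvm.conf on every SAN-attached node
#   tags {
#       hosttags = 1          # each host implicitly carries a tag equal to its hostname
#   }
#   activation {
#       # only activate the local root VG and VGs whose tags match one of this host's tags
#       volume_list = [ "VolGroup00", "@*" ]
#   }

# Tag the shared VG with the cluster members allowed to use it (run once, from a cluster node):
vgchange --addtag link-01 gfs
vgchange --addtag link-02 gfs

# On a non-member such as link-08, the volume_list policy then refuses to
# activate "gfs", since the VG carries no tag matching that host:
vgchange -ay gfs
```

Note that volume_list only gates activation; it would not have blocked the vgremove/vgcreate steps shown in the description below, which is part of why enforcement inside the lvm2 tools themselves is being requested here.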
Description of problem:

We've discussed this issue before; filing a bz to have it documented. LVM can't stop SAN invaders. We all know that one can always dd over in-use devices, but with a volume manager one may expect more ownership control.

On a two-node cluster (link-01, link-02):

```
[root@link-01 ~]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/gfs0 type gfs (rw,acl)

[root@link-02 ~]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/gfs0 type gfs (rw,acl)

[root@link-02 init.d]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "gfs" using metadata type lvm2
```

On another machine not in the cluster (link-08):

```
[root@link-08 lvm]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
  Found volume group "gfs" using metadata type lvm2

[root@link-08 lvm]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               54.81 GB
  PE Size               32.00 MB
  Total PE              1754
  Alloc PE / Size       1752 / 54.75 GB
  Free  PE / Size       2 / 64.00 MB
  VG UUID               Jiq2dT-75ik-N4hq-kfXJ-ZKX3-KUN4-6xUvkx

  --- Volume group ---
  VG Name               gfs
  System ID
  Format                lvm2
  Metadata Areas        6
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                6
  Act PV                6
  VG Size               949.61 GB
  PE Size               4.00 MB
  Total PE              243100
  Alloc PE / Size       243100 / 949.61 GB
  Free  PE / Size       0 / 0
  VG UUID               2u3e5y-PbzE-nGY4-Y4Q5-b34j-tp7z-HdZked

[root@link-08 lvm]# ls /dev/gfs/gfs0
ls: /dev/gfs/gfs0: No such file or directory

[root@link-08 lvm]# vgchange -ay
  2 logical volume(s) in volume group "VolGroup00" now active
  1 logical volume(s) in volume group "gfs" now active

[root@link-08 lvm]# ls /dev/gfs/gfs0
/dev/gfs/gfs0

[root@link-08 lvm]# mkfs /dev/gfs/gfs0
mke2fs 1.35 (28-Feb-2004)
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 964
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
124469248 inodes, 248934400 blocks
12446720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7597 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000, 214990848

Writing inode tables: done
inode.i_blocks = 146536, i_size = 4243456
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@link-08 lvm]# mkdir /mnt/ext0
[root@link-08 lvm]# mount /dev/gfs/gfs0 /mnt/ext0
[root@link-08 lvm]# mount | grep gfs
/dev/mapper/gfs-gfs0 on /mnt/ext0 type ext2 (rw)

[root@link-08 lvm]# umount /mnt/ext0/

[root@link-08 lvm]# lvremove /dev/gfs/gfs0
Do you really want to remove active logical volume "gfs0"? [y/n]: y
  Logical volume "gfs0" successfully removed

[root@link-08 lvm]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
  Found volume group "gfs" using metadata type lvm2

[root@link-08 lvm]# vgremove gfs
  Volume group "gfs" successfully removed

[root@link-08 lvm]# pvscan
  PV /dev/hda6   VG VolGroup00   lvm2 [54.81 GB / 64.00 MB free]
  PV /dev/sda1                   lvm2 [135.66 GB]
  PV /dev/sdb1                   lvm2 [135.66 GB]
  PV /dev/sdc1                   lvm2 [135.66 GB]
  PV /dev/sdd1                   lvm2 [135.66 GB]
  PV /dev/sde1                   lvm2 [271.33 GB]
  PV /dev/sdf1                   lvm2 [135.66 GB]
  Total: 7 [1004.43 GB] / in use: 1 [54.81 GB] / in no VG: 6 [949.62 GB]

[root@link-08 lvm]# vgcreate myvol /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sdc1 /dev/sdb1 /dev/sda1
  Volume group "myvol" successfully created

[root@link-08 lvm]# lvcreate -l 243100 myvol
  Logical volume "lvol0" created

[root@link-08 lvm]# mkfs /dev/myvol/lvol0
mke2fs 1.35 (28-Feb-2004)
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 964
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
124469248 inodes, 248934400 blocks
12446720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7597 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000, 214990848

Writing inode tables: done
inode.i_blocks = 146536, i_size = 4243456
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
```
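For what it's worth, an admin on a SAN-attached host can at least spot that a discovered VG is marked clustered before activating anything. A minimal manual check, assuming the VG names from the description and standard lvm2 reporting options (vgs, --noheadings, -o vg_attr); the sixth vg_attr character is 'c' for a clustered VG. This is only a convention, not enforcement, which is exactly the gap this RFE is about:

```
# List attributes of visible VGs; "gfs" shows e.g. "wz--nc" (clustered),
# while VolGroup00 shows "wz--n-".
vgs -o vg_name,vg_attr,vg_uuid

# Refuse to activate automatically unless the VG is known not to be clustered:
if vgs --noheadings -o vg_attr gfs | tr -d '[:space:]' | grep -q 'c$'; then
    echo "VG gfs is marked clustered; not activating from this host" >&2
else
    vgchange -ay gfs
fi
```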